“Why I am Still Skeptical about AGI by 2030” by James Fodor

23/05/2025

Introduction
I have been writing posts critical of mainstream EA narratives about AI capabilities and timelines for many years now. Compared to the situation when I wrote my posts in 2018 or 2020, LLMs now dominate the discussion, and timelines have also shrunk enormously. The ‘mainstream view’ within EA now appears to be that human-level AI will arrive by 2030, perhaps even as early as 2027. This view has been articulated by 80,000 Hours, on the forum (though see this excellent piece arguing against short timelines), and in the highly engaging science fiction scenario of AI 2027. While my piece is directed generally against all such short-horizon views, I will focus on responding to relevant portions of the article ‘Preparing for the Intelligence Explosion’ by Will MacAskill and Fin Moorhouse.
Rates of Growth
The authors summarise their argument as follows:
Currently, total global research effort [...]
---
Outline:
(00:11) Introduction
(01:05) Rates of Growth
(04:55) The Limitations of Benchmarks
(09:26) Real-World Adoption
(11:31) Conclusion
---
First published:
May 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/meNrhbgM3NwqAufwj/why-i-am-still-skeptical-about-agi-by-2030
--- Narrated by TYPE III AUDIO.
