The Future of Life
Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Facing Superintelligence (with Ben Goertzel)
On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.

Timestamps:
00:00:00 Preview and intro
00:01:59 Thinking about AGI in the 1970s
00:07:28 What's different about this AI boom?
00:16:10 Former taboos about AGI
00:19:53 AI research worth revisiting
00:35:53 Will the first AGI be simple?
00:48:49 Is alignment achievable?
01:02:40 Benchmarks and economic impact
01:15:23 Bottlenecks to superintelligence
01:23:09 What should we do?
Internet and technology · 1 week ago
01:32:33
Will Future AIs Be Conscious? (with Jeff Sebo)
On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively.

You can follow Jeff's work here: https://jeffsebo.net/

Timestamps:
00:00:00 Preview and intro
00:02:56 Imagining artificial consciousness
00:07:51 Substrate-independence?
00:11:26 Are we making progress?
00:18:03 Intuitions about explanations
00:24:43 AI risk and AI consciousness
00:40:01 Consciousness and cognitive complexity
00:51:20 Intuition versus intellect
00:58:48 AIs as companions
01:05:24 AI rights
01:13:00 Acting under time pressure
01:20:16 Measuring consciousness
01:32:11 How can you help?
Internet and technology · 2 weeks ago
01:34:27
Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity's uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI's growing influence in financial trading.

You can follow Zvi's excellent blog here: https://thezvi.substack.com

Timestamps:
00:00:00 Preview and introduction
00:02:01 Sycophantic AIs
00:07:28 Bottlenecks for AI agents
00:21:26 Are benchmarks useful?
00:32:39 AI agent time horizons
00:44:18 Impact of automating research
00:53:00 Limits to scaling inference compute
01:02:51 Will the future go well for humanity?
01:12:22 A good plan for safe AI
01:26:03 What makes AI different?
01:31:29 AI in trading
Internet and technology · 3 weeks ago
01:35:09
Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)
On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI's global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps.

You can learn more about Jeffrey's work at: https://jeffreyjding.github.io

Timestamps:
00:00:00 Preview and introduction
00:01:36 A US-China AI arms race?
00:10:58 Attitudes to AI safety in China
00:17:53 Diffusion of AI
00:25:13 Innovation without diffusion
00:34:29 AI development concentration
00:41:40 Learning from the history of technology
00:47:48 Translating Chinese AI writings
00:55:36 Automating translation of AI writings
Internet and technology · 1 month ago
01:02:32
How Will We Cooperate with AIs? (with Allison Duettmann)
On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI's trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children.

You can learn more about Allison's work at: https://foresight.org

Timestamps:
00:00:00 Preview
00:01:07 Centralized AI versus decentralized AI
00:13:02 Risks from decentralized AI
00:25:39 International AI governance
00:39:52 Cooperation with future AIs
00:53:51 AI for decision-making
01:05:58 Capital intensity of AI
01:09:11 Lessons from history
01:15:50 Future space law and property rights
01:27:28 Is technology invented or discovered?
01:32:34 Children in the age of AI
Internet and technology · 1 month ago
01:36:02
Brain-like AGI and Why It's Dangerous (with Steven Byrnes)
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.

You can learn more about Steven's work at: https://sjbyrnes.com/agi.html

Timestamps:
00:00 Preview
00:54 Brain-like AGI safety
13:16 Controlled AGI versus social-instinct AGI
19:12 Learning from the brain
28:36 Why is brain-like AI the most likely path to AGI?
39:23 Honesty in AI models
44:02 How to help with brain-like AGI safety
53:36 AI traits with both positive and negative effects
01:02:44 Different AI safety strategies
Internet and technology · 1 month ago
01:13:13
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec's Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.

You can learn more about Ege's work at https://epoch.ai

Timestamps:
00:00:00 Preview and introduction
00:02:59 Compute scaling and automation: GATE model
00:13:12 Evolution, brain efficiency, and AGI compute requirements
00:29:49 Broad automation vs. R&D-focused AI deployment
00:47:19 AI, wages, and labor market transitions
00:59:54 Training agentic models and long-term planning capabilities
01:06:56 Moravec's Paradox and automation of human skills
01:13:59 Which jobs are most vulnerable to AI?
01:33:00 Timeline extremes: what could change AI forecasts?
Internet and technology · 2 months ago
01:34:33
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini is a security researcher at Google DeepMind who has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.

Timestamps:
00:00 Nicholas Carlini's contributions to cybersecurity
08:19 Understanding attack strategies
29:39 High-dimensional spaces and attack intuitions
51:00 Challenges in open-source model safety
01:00:11 Unlearning and fact editing in models
01:10:55 Adversarial examples and human robustness
01:37:03 Cryptography and AI robustness
01:55:51 Scaling AI security research
Internet and technology · 2 months ago
02:23:12
Keep the Future Human (with Anthony Aguirre)
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai

AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI...

Timestamps:
00:00 What situation is humanity in?
05:00 Why AI progress is fast
09:56 Tool AI instead of AGI
15:56 The incentives of AI companies
19:13 Governments can coordinate a slowdown
25:20 The need for international coordination
31:59 Monitoring training runs
39:10 Do reasoning models undermine compute governance?
49:09 Why isn't alignment enough?
59:42 How do we decide if we want AGI?
01:02:18 Disagreement about AI
01:11:12 The early days of AI risk
Internet and technology · 2 months ago
01:21:03
We Created AI. Why Don't We Understand It? (with Samir Varma)
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts AIs might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.

You can find out more about Samir's work here: https://samirvarma.com

Timestamps:
00:00 AIs with free will?
08:00 Can we predict AI behavior?
11:38 AI psychology
16:24 Which concepts will AIs use?
20:19 Will we collaborate with AIs?
26:16 Will we trade with AIs?
31:40 Training data for robots
34:00 AI in finance
39:55 How much of trading is automated?
49:00 AI in biology and complex systems
59:31 Will our skills atrophy?
01:02:55 Levels of scientific explanation
01:06:12 AIs with emotions and consciousness?
01:12:12 Why can't we predict recessions?
Internet and technology · 2 months ago
01:16:15
Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us.

We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here: https://palisaderesearch.org/blog/specification-gaming

Timestamps:
00:00 The pace of AI progress
04:15 How we might lose control
07:23 Why are AIs sometimes dumb?
12:52 Benchmarks vs real world
19:11 Loss of control scenarios
26:36 Why would AI turn against us?
30:35 AIs hacking chess
36:25 Why didn't more advanced AIs hack?
41:39 Creating honest AIs
49:44 AI attackers vs AI defenders
58:27 How good is security at AI companies?
01:03:37 A sense of urgency
01:10:11 What should we do?
01:15:54 Skepticism about AI progress
Internet and technology · 3 months ago
01:22:33
Ann Pace on Using Biobanking and Genomic Sequencing to Conserve Biodiversity
Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.

You can learn more about Ann's work here: https://www.wiseancestors.org

Timestamps:
00:00 What is Wise Ancestors?
04:27 Recovering after catastrophes
11:40 Decentralized science
18:28 Upfront benefit-sharing
26:30 Local communities
32:44 Recreating optimal environments
38:57 Cross-cultural collaboration
Internet and technology · 3 months ago
46:09
Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective
Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI.

You can learn more about Michael's work here: https://catholic.tech/academics/faculty/michael-baggot

Timestamps:
00:00 Meta-narratives and transhumanism
15:28 Advanced AI and religious communities
27:22 Superintelligence
38:31 Countercultures and technology
52:38 Christian perspectives and tradition
01:05:20 God-like artificial intelligence
01:13:15 A positive vision for AI
Internet and technology · 4 months ago
01:25:56
David Dalrymple on Safeguarded, Transformative AI
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.

You can learn more about David's work at ARIA here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/

Timestamps:
00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life
Internet and technology · 4 months ago
01:40:06
Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters.

Learn more about Nick's work here: https://www.nickallardice.com

Timestamps:
00:00 What is GiveDirectly?
15:04 AI for targeting cash transfers
29:39 AI for predicting natural disasters
46:04 How scalable is GiveDirectly's AI approach?
58:10 Decentralized vs. centralized data collection
01:04:30 Dream scenario for GiveDirectly
Internet and technology · 5 months ago
01:09:26
Nathan Labenz on the State of AI and Progress since GPT-4
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4.

You can find Nathan's podcast here: https://www.cognitiverevolution.ai

Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety
Internet and technology · 5 months ago
03:20:04
Connor Leahy on Why Humanity Risks Extinction from AGI
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.

Here's the document we discuss in the episode: https://www.thecompendium.ai

Timestamps:
00:00 The Compendium
15:25 The motivations of AGI corps
31:17 AI is grown, not written
52:59 A science of intelligence
01:07:50 Jobs, work, and AGI
01:23:19 Superintelligence
01:37:42 Open-source AI
01:45:07 What can we do?
Internet and technology · 6 months ago
01:58:50
Suzy Shepherd on Imagining Superintelligence and "Writing Doom"
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world.

Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4

Timestamps:
00:00 Writing Doom
08:23 Humor in Writing Doom
13:31 Concise writing
18:37 Getting
27:02 Alternative characters
36:31 Popular video formats
46:53 AI in filmmaking
49:52 Meaning in the future
Internet and technology · 6 months ago
01:03:08
Andrea Miotti on a Narrow Path to Safe, Transformative AI
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like.

Here's the document we discuss in the episode: https://www.narrowpath.co

Timestamps:
00:00 A Narrow Path
06:10 Can we predict future AI capabilities?
11:10 Risks from current AI development
17:56 The benefits of narrow AI
22:30 Against self-improving AI
28:00 Cybersecurity at AI companies
33:55 Unbounded AI
39:31 Global coordination on AI safety
49:43 Monitoring training runs
01:00:20 Benefits of cooperation
01:04:58 A science of intelligence
01:25:36 How you can help
Internet and technology · 7 months ago
01:28:09
Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute.

Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030

Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030?
18:33 AI agents, reasoning, and planning
23:39 Automating coding and mathematics
31:26 Uncertainty about investing in AI
40:34 Gap between investment and returns
45:30 Compute, software and data
51:54 Inference-time compute
01:08:49 Returns to software R&D
01:19:22 Limits to expanding compute
Internet and technology · 7 months ago
01:30:29