Your Undivided Attention
Podcast




Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think.

Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

People are Lonelier than Ever. Enter AI.
Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder. And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them. How will that change us? And what rules should we set down now to avoid the mistakes of the past? These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Alone Together,” “Evocative Objects,” “The Second Self,” or any other of Sherry Turkle’s books on how technology mediates our relationships
Key & Peele - Text Message Confusion
Further reading on Hinge’s rollout of AI features
Hinge’s AI principles
“The Anxious Generation” by Jonathan Haidt
“Bowling Alone” by Robert Putnam
The NYT profile on the woman in love with ChatGPT
Further reading on the Sewell Setzer story
Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Internet & technology · 6 days ago
43:34
Echo Chambers of One: Companion AI and the Future of Human Connection
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person. But these AI companions are not human, they’re a platform designed to maximize engagement—and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg’s laws of technology
More information on MIT’s Advancing Humans with AI lab
Pattie and Pat’s longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat’s study that found that AI systems that frame answers and questions improve human understanding
Pat’s study that found humans’ pre-existing beliefs about AI can have large influence on human-AI interaction
Further reading on AI’s positivity bias
Further reading on MIT’s “lifelong kindergarten” initiative
Further reading on “cognitive forcing functions” to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother’s case against Character.AI
Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
Internet & technology · 3 weeks ago
42:17
AGI Beyond the Buzz: What Is It, and Are We Ready?
What does it really mean to ‘feel the AGI?’ Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous. In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI’ really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies. As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety? Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA
Daniel Kokotajlo et al’s “AI 2027” paper
A demo of Omni Human One, referenced by Randy
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
A paper from Palisades Research that found an AI would cheat in order to win
The treaty that banned blinding laser weapons
Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
Behind the DeepSeek Hype, AI is Learning to Reason
The Tech-God Complex: Why We Need to be Skeptics
This Moment in AI: How We Got Here and Where We’re Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.
Internet & technology · 1 month ago
52:53
Rethinking School in the Age of AI
AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken. So what comes next? In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop—two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

Guests
Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.
Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

RECOMMENDED MEDIA
The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson
Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf
The OECD research which found little benefit to desktop computers in the classroom
Further reading on the Singapore study on digital exposure and attention cited by Maryanne
The Burnout Society by Byung-Chul Han
Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca
Leapfrogging Inequality by Rebecca Winthrop
The Nation’s Report Card from NAEP
Further reading on the Nigeria AI Tutor Study
Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne
Further reading on Linda Stone’s thesis of continuous partial attention

RECOMMENDED YUA EPISODES
‘We Have to Get It Right’: Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Internet & technology · 1 month ago
42:35
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
Artificial intelligence is set to unleash an explosion of new technologies and discoveries into the world. This could lead to incredible advances in human flourishing, if we do it well. The problem? We’re not very good at predicting and responding to the harms of new technologies, especially when those harms are slow-moving and invisible. Today on the show we explore this fundamental problem with Rob Bilott, an environmental lawyer who has spent nearly three decades battling chemical giants over PFAS—"forever chemicals" now found in our water, soil, and blood. These chemicals helped build the modern economy, but they’ve also been shown to cause serious health problems. Rob’s story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety, and mitigate irreversible harms before they become permanent. We only have one chance to get it right before AI becomes irreversibly entangled in our society.

Your Undivided Attention is produced by the Center for Humane Technology. Subscribe to our Substack and follow us on X: @HumaneTech_.

Clarification: Rob referenced EPA regulations that have recently been put in place requiring testing on new chemicals before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.

RECOMMENDED MEDIA
“Exposure” by Robert Bilott
ProPublica’s investigation into 3M’s production of PFAS
The FB study cited by Tristan
More information on the Exxon Valdez oil spill
The EPA’s PFAS drinking water standards

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
AI Is Moving Fast. We Need Laws that Will Too.
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Big Food, Big Tech and Big AI with Michael Moss
Internet & technology · 2 months ago
01:04:33
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured. Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information. In this episode, Oreskes breaks down that playbook page-by-page while offering practical ways to build resistance against these tactics. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
“Merchants of Doubt” by Naomi Oreskes and Eric Conway
“The Big Myth” by Naomi Oreskes and Eric Conway
“Silent Spring” by Rachel Carson
“The Jungle” by Upton Sinclair
Further reading on the clash between Galileo and the Pope
Further reading on the Montreal Protocol

RECOMMENDED YUA EPISODES
Laughing at Power: A Troublemaker’s Guide to Changing Tech
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

CORRECTIONS: Naomi incorrectly referenced the Global Climate Research Program established under President Bush Sr. The correct name is the U.S. Global Change Research Program. Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been created with sunset clauses, no federal agency has been.

CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that they would be “destroyed” by seatbelt regulation. We couldn’t verify this specific language, but it is consistent with the anti-regulatory stance of that industry toward seatbelt laws.
Internet & technology · 2 months ago
51:20
The Man Who Predicted the Downfall of Thinking
Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that “we are a people on the verge of amusing ourselves to death.” Though he was writing mostly about TV, Postman’s insights feel eerily prophetic in our age of smartphones, social media, and AI. In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy - from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

RECOMMENDED MEDIA
“Amusing Ourselves to Death” by Neil Postman (PDF of full book)
“Technopoly” by Neil Postman (PDF of full book)
A lecture from Postman where he outlines his seven questions for any new technology
Sean’s podcast “The Gray Area” from Vox
Sean’s interview with Chris Hayes on “The Gray Area”
Further reading on mirror bacteria

RECOMMENDED YUA EPISODES
‘A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover
This Moment in AI: How We Got Here and Where We’re Going
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt
Future-proofing Democracy In the Age of AI with Audrey Tang

CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.
Internet & technology · 3 months ago
58:57
Behind the DeepSeek Hype, AI is Learning to Reason
When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI. In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks. These capabilities are a step toward a critical threshold - when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and Starcraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been “solved” in the game theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by AlphaGo’s Move 37.

RECOMMENDED MEDIA
Further reading on DeepSeek’s R1 and the market reaction
Further reading on the debate about the actual cost of DeepSeek’s R1 model
The study that found training AIs to code also made them better writers
More information on the AI coding company Cursor
Further reading on Eric Schmidt’s threshold to “pull the plug” on AI
Further reading on Move 37

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We’re Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Internet & technology · 3 months ago
31:34
The Self-Preserving Machine: Why AI Learns to Deceive
When engineers design AI systems, they don't just give them rules - they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie. In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team’s findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_. Subscribe to our YouTube channel and our brand new Substack!

RECOMMENDED MEDIA
Anthropic’s blog post on the Redwood Research paper
Palisade Research’s thread on X about GPT o1 autonomously cheating at chess
Apollo Research’s paper on AI strategic deception

RECOMMENDED YUA EPISODES
‘We Have to Get It Right’: Gary Marcus On Untamed AI
This Moment in AI: How We Got Here and Where We’re Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Internet & technology · 4 months ago
34:51
Laughing at Power: A Troublemaker’s Guide to Changing Tech
The status quo of tech today is untenable: we’re addicted to our devices, we’ve become increasingly polarized, our mental health is suffering, and our personal data is sold to the highest bidder. This situation feels entrenched, propped up by a system of broken incentives beyond our control. So how do you shift an immovable status quo? Our guest today, Srdja Popovic, has been working to answer this question his whole life.

As a young activist, Popovic helped overthrow Serbian dictator Slobodan Milosevic by turning creative resistance into an art form. His tactics didn't just challenge authority, they transformed how people saw their own power to create change. Since then, he's dedicated his life to supporting peaceful movements around the globe, developing innovative strategies that expose the fragility of seemingly untouchable systems. In this episode, Popovic sits down with CHT's Executive Director Daniel Barcay to explore how these same principles of creative resistance might help us address the challenges we face with tech today.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

RECOMMENDED MEDIA
“Pranksters vs. Autocrats” by Srdja Popovic and Sophia A. McClennen
“Blueprint for Revolution” by Srdja Popovic
The Center for Applied Non-Violent Actions and Strategies, Srdja’s organization promoting peaceful resistance around the globe
Tactics4Change, a database of global dilemma actions created by CANVAS
The Power of Laughtivism, Srdja’s viral TEDx talk from 2013
Further reading on the dilemma action tactics used by Syrian rebels
Further reading on the toy protest in Siberia
More info on The Yes Men and their activism toolkit
Beautiful Trouble
“This is Not Propaganda” by Peter Pomerantsev
“Machines of Loving Grace,” the essay on AI by Anthropic CEO Dario Amodei, which mentions creating an AI Srdja

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
The Tech We Need for 21st Century Democracy with Divya Siddarth
The Race to Cooperation with David Sloan Wilson

CLARIFICATION: Srdja makes reference to Russian President Vladimir Putin wanting to win an election in 2012 by 82%. Putin did win that election, but only by 63.6%. However, international election observers concluded that "there was no real competition and abuse of government resources ensured that the ultimate winner of the election was never in doubt."
Internet & technology · 4 months ago
45:47
Ask Us Anything 2024
2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year.

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humanetech.com/donate. All donations are tax-deductible.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Earth Species Project, Aza’s organization working on inter-species communication
Further reading on Gryphon Scientific’s White House AI Demo
Further reading on the Australian social media ban for children under 16
Further reading on the Sewell Setzer case
Further reading on the Oviedo Convention, the international treaty that restricted germline editing
Video of Space X’s successful capture of a rocket with “chopsticks”

RECOMMENDED YUA EPISODES
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We’re Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Talking With Animals... Using AI
The Three Rules of Humane Tech
Internet & technology · 5 months ago
40:04
The Tech-God Complex: Why We Need to be Skeptics
Silicon Valley's interest in AI is driven by more than just profit and innovation. There’s an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X. If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps our goal to bring about a more humane future.

RECOMMENDED MEDIA
“Tech Agnostic” by Greg Epstein
Further reading on Avi Schiffmann’s “Friend” AI necklace
Further reading on Blake Lemoine and LaMDA
Blake Lemoine’s conversation with Greg at MIT
Further reading on the Sewell Setzer case
Further reading on Terminal of Truths
Further reading on Ray Kurzweil’s attempt to create a digital recreation of his dad with AI
The Drama of the Gifted Child by Alice Miller

RECOMMENDED YUA EPISODES
‘A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover
How to Think About AI Consciousness with Anil Seth
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei
How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan
Internet & technology · 6 months ago
46:32
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
CW: This episode features discussion of suicide and sexual abuse.

In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next? Megan Garcia, Sewell's mother, has filed a major new lawsuit against Character.ai in Florida, which could force the company–and potentially the entire AI industry–to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT’s Policy Director.

RECOMMENDED MEDIA
Further reading on Sewell’s story
Laurie Segall’s interview with Megan Garcia
The full complaint filed by Megan against Character.AI
Further reading on suicide bots
Further reading on Noam Shazeer and Daniel De Freitas’ relationship with Google
The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use

Organizations mentioned:
The Tech Justice Law Project
The Social Media Victims Law Center
Mothers Against Media Addiction
Parents SOS
Parents Together
Common Sense Media

RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.

Corrections:
Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users that are under 17, not under 18.
Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that have made carve-outs for that immunity for criminal acts such as sex trafficking and intellectual property theft.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Internet & technology · 7 months ago
48:44
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.

Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she’s suing the company that made those chatbots. On today’s episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie’s full interview with Megan on her new show, Dear Tomorrow. Aza and Laurie discuss the profound implications of Sewell’s story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell’s story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further reading on Sewell’s case
Character.ai’s “” page
Further reading on the addictive properties of AI

RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We’re Going
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
The AI Dilemma
Internet & technology · 7 months ago
49:10
Is It AI? One Tool to Tell What’s Real with Truemedia.org CEO Oren Etzioni
Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what’s real and what’s not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue? As it turns out, there’s a whole ecosystem of folks working to answer that question. One is computer scientist Oren Etzioni, the CEO of TrueMedia.org, a free, non-partisan, non-profit tool that is able to detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
TrueMedia.org
Further reading on the deepfaked image of an explosion near the Pentagon
Further reading on the deepfaked robocall pretending to be President Biden
Further reading on the election deepfake in Slovakia
Further reading on the President Obama lip-syncing deepfake from 2017
One of several deepfake quizzes from the New York Times, test yourself!
The Partnership on AI
C2PA
Witness.org
Truepic

RECOMMENDED YUA EPISODES
‘We Have to Get It Right’: Gary Marcus On Untamed AI
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
Synthetic Humanity: AI & What’s At Stake

CLARIFICATION: Oren said that the largest social media platforms “don’t see a responsibility to let the public know this was manipulated by AI.” Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms like TikTok and Snapchat rely on users to flag.
Internet & Technology · 7 months ago · 25:36
'A Turning Point in History': Yuval Noah Harari on AI’s Cultural Takeover
Historian Yuval Noah Harari says that we are at a critical turning point. One in which AI’s ability to generate cultural artifacts threatens humanity’s role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents’? In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity’s AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future. This episode was recorded live at the Commonwealth Club World Affairs of California. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari  You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan  The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza  Further reading on the Stanford Marshmallow Experiment Further reading on AlphaGo’s “move 37”  Further Reading on Social.AI RECOMMENDED YUA EPISODES This Moment in AI: How We Got Here and Where We’re Going The Tech We Need for 21st Century Democracy with Divya Siddarth Synthetic Humanity: AI & What’s At Stake The AI Dilemma Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
Internet & Technology · 8 months ago · 01:30:41
‘We Have to Get It Right’: Gary Marcus On Untamed AI
It’s a confusing moment in AI. Depending on who you ask, we’re either on the fast track to AI that’s smarter than most humans, or the technology is about to hit a wall and the bubble will burst. Gary Marcus is in the latter camp. He’s a cognitive psychologist and computer scientist who built his own successful AI start-up. But he’s also been called AI’s loudest critic. On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us. The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA Link to Gary’s book: Taming Silicon Valley: How We Can Ensure That AI Works for Us Further reading on the deepfake of the CEO of India's National Stock Exchange Further reading on the deepfake of an explosion near the Pentagon. The study Gary cited on AI and false memories. Footage from Gary and Sam Altman’s Senate testimony. RECOMMENDED YUA EPISODES Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet No One is Immune to AI Harms with Dr. Joy Buolamwini Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government’s standard for GPS reliability is 95%.
Internet & Technology · 8 months ago · 41:43
AI Is Moving Fast. We Need Laws that Will Too.
AI is moving fast. And as companies race to roll out newer, more capable models, with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week’s episode of Your Undivided Attention, CHT’s policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA The CHT Liability Framework Further Reading on Air Canada’s Chatbot Fiasco Further Reading on the Elon Musk Deep Fake Scams The Full Text of SB1047, California’s AI Regulation Bill Further reading on SB1047 RECOMMENDED YUA EPISODES Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn Can We Govern AI? with Marietje Schaake A First Step Toward AI Regulation with Tom Wheeler Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.
Internet & Technology · 8 months ago · 39:09
Esther Perel on Artificial Intimacy (rerun)
[This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that another harmful “AI” is on the rise: Artificial Intimacy, which deprives us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds. RECOMMENDED MEDIA Mating in Captivity by Esther Perel Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire The State of Affairs by Esther Perel Esther takes a look at modern relationships through the lens of infidelity Where Should We Begin? with Esther Perel Listen in as real couples in search of help bare the raw and profound details of their stories How’s Work? with Esther Perel Esther’s podcast that focuses on the hard conversations we're afraid to have at work Lars and the Real Girl (2007) A young man strikes up an unconventional relationship with a doll he finds on the internet Her (2013) In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need RECOMMENDED YUA EPISODES Big Food, Big Tech and Big AI with Michael Moss The AI Dilemma The Three Rules of Humane Tech Digital Democracy is Within Reach with Audrey Tang CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Internet & Technology · 9 months ago · 44:52
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn’t true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O’Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody’s book on the history of lobbying. The Code: Silicon Valley and the Remaking of America - Margaret’s book on the historical relationship between Silicon Valley and Capitol Hill More information on the Google antitrust ruling More Information on KOSPA More information on the SOPA/PIPA internet blackout Detailed breakdown of Internet lobbying from Open Secrets RECOMMENDED YUA EPISODES U.S. Senators Grilled Social Media CEOs. Will Anything Change? Can We Govern AI? with Marietje Schaake The Race to Cooperation with David Sloan Wilson CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn’t confirm the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets. The views expressed by guests appearing on Center for Humane Technology’s podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not endorse or oppose any candidate or party for election to public office.
Internet & Technology · 9 months ago · 43:59
You might also like
The Future of Life
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles. Updated
Singularity University Radio
Explore the technologies shaping our future with the Singularity Discussion Series, from Singularity University. In each episode, leading experts dive deep into the current and future implications of exponential technologies—from AI and biotech to blockchain and quantum computing—and how their convergence is transforming industries, economies, and everyday life. Join us as we uncover the breakthroughs redefining what's possible and explore how leaders, innovators, and society can navigate this accelerating future. Visit su.org/events to join us live on Zoom for future conversations. Updated
CiscoChat Podcast
The Cisco Podcast Network is a collective of podcasts from across Cisco spanning technology to culture and everything in between. Hear from Cisco customers, partners, and Cisco insiders on the topics that matter most to you. Email us with your feedback and suggestions. Discover your favorite playlists today! Updated