
Podcast
Towards Data Science
By The TDS team
Note: The TDS podcast's current run has ended.
Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.
111. Mo Gawdat - Scary Smart: A former Google exec’s perspective on AI risk
Episode of Towards Data Science
If you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London that read, “Can this man save the world from artificial intelligence?”. The man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as the Chief Business Officer at GoogleX (now called X Development), Google’s semi-secret research facility that experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields — one of which was AI. His experience seeing AI systems learn and interact with the world raised red flags for him — hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now. Mo writes about his experience as an insider at one of the world’s most secretive research labs, and how it led him to worry about AI risk but also about AI’s promise and potential, in his new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. He joined me to talk about just that on this episode of the TDS podcast.
01:00:12
110. Alex Turner - Will powerful AIs tend to seek power?
Episode of Towards Data Science
Today’s episode is somewhat special, because we’re going to be talking about what might be the first solid quantitative study of the power-seeking tendencies that we can expect advanced AI systems to have in the future. For a long time, there’s been a debate in the AI safety world between people who worry that powerful AIs could eventually displace, or even eliminate, humanity altogether as they find more clever, creative and dangerous ways to optimize their reward metrics on the one hand, and people who say that’s Terminator-baiting Hollywood nonsense that anthropomorphizes machines in a way that’s unhelpful and misleading on the other. Unfortunately, recent work in AI alignment — and in particular, a spotlighted 2021 NeurIPS paper — suggests that the AI takeover argument might be stronger than many had realized. In fact, it’s starting to look like we ought to expect to see power-seeking behaviours from highly capable AI systems by default. These behaviours include things like AI systems preventing us from shutting them down, repurposing resources in pathological ways to serve their objectives, and even, in the limit, generating catastrophes that would put humanity at risk. As concerning as these possibilities might be, it’s exciting that we’re starting to develop a more robust and quantitative language to describe AI failures and power-seeking. That’s why I was so excited to sit down with AI researcher Alex Turner, the author of the spotlighted NeurIPS paper on power-seeking, and discuss his path into AI safety, his research agenda and his perspective on the future of AI on this episode of the TDS podcast. *** Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc *** Chapters: - 2:05 Interest in alignment research - 8:00 Two camps of alignment research - 13:10 The NeurIPS paper - 17:10 Optimal policies - 25:00 Two-piece argument - 28:30 Relaxing certain assumptions - 32:45 Objections to the paper - 39:00 Broader sense of optimization - 46:35 Wrap-up
46:57
109. Danijar Hafner - Gaming our way to AGI
Episode of Towards Data Science
Until recently, AI systems have been narrow — they’ve only been able to perform the specific tasks that they were explicitly trained for. And while narrow systems are clearly useful, the holy grail of AI is to build more flexible, general systems. But that can’t be done without good performance metrics that we can optimize for — or that we can at least use to measure generalization ability. Somehow, we need to figure out what number needs to go up in order to bring us closer to generally-capable agents. That’s the question we’ll be exploring on this episode of the podcast, with Danijar Hafner. Danijar is a PhD student in artificial intelligence at the University of Toronto with Jimmy Ba and Geoffrey Hinton, and a researcher at Google Brain and the Vector Institute. Danijar has been studying the problem of performance measurement and benchmarking for RL agents with generalization abilities. As part of that work, he recently released Crafter, a tool that can procedurally generate complex environments that are a lot like Minecraft, featuring resources that need to be collected, tools that can be developed, and enemies who need to be avoided or defeated. In order to succeed in a Crafter environment, agents need to robustly plan, explore and test different strategies, allowing them to unlock certain in-game achievements. Crafter is part of a growing set of strategies that researchers are exploring to figure out how we can benchmark and measure the performance of general-purpose AIs, and it also tells us something interesting about the state of AI: increasingly, our ability to define tasks that require the right kind of generalization abilities is becoming just as important as innovating on AI model architectures. Danijar joined me to talk about Crafter, reinforcement learning, and the big challenges facing AI researchers as they work towards general intelligence on this episode of the TDS podcast. *** Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc *** Chapters: 0:00 Intro 2:25 Measuring generalization 5:40 What is Crafter? 11:10 Differences between Crafter and Minecraft 20:10 Agent behavior 25:30 Merging scaled models and reinforcement learning 29:30 Data efficiency 38:00 Hierarchical learning 43:20 Human-level systems 48:40 Cultural overlap 49:50 Wrap-up
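To make the benchmarking idea concrete, here is a minimal sketch of how an agent can be evaluated in a Crafter-style environment. It assumes the gym-style reset/step interface documented in the open-source Crafter repository; the environment ID and the achievement-counting details below are assumptions made for illustration, not Danijar's exact evaluation code.

```python
# Minimal sketch: run a random agent in a Crafter-style environment and count
# the achievements it unlocks. Assumes the gym-style API documented in the
# open-source Crafter repository (older 4-tuple step interface); details such
# as the environment ID and info keys are assumptions.
import gym
import crafter  # noqa: F401  (importing registers the Crafter environments)

env = gym.make("CrafterReward-v1")  # assumed environment ID
obs = env.reset()
done = False
total_reward = 0.0

while not done:
    action = env.action_space.sample()  # random baseline policy
    obs, reward, done, info = env.step(action)
    total_reward += reward

# Crafter tracks achievements like collecting wood or defeating a zombie;
# the fraction unlocked is one way to score how generally capable an agent is.
achievements = info.get("achievements", {})
unlocked = [name for name, count in achievements.items() if count > 0]
print(f"reward={total_reward:.1f}, achievements unlocked={len(unlocked)}")
```

A benchmark of this shape rewards breadth: an agent only scores well if it can plan, explore and combine sub-skills, rather than exploiting a single reward loop.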
50:06
108. Let’s Talk AI — 2021: The (full) year in review
Episode of Towards Data Science
2021 has been a wild ride in many ways, but its wildest features might actually be AI-related. We’ve seen major advances in everything from language modeling to multi-modal learning, open-ended learning and even AI alignment. So, we thought, what better way to take stock of the big AI-related milestones we’ve reached in 2021 than a cross-over episode with our friends over at the Let’s Talk AI podcast. *** Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc *** Chapters: 0:00 Intro 2:15 Rise of multi-modal models 7:40 Growth of hardware and compute 13:20 Reinforcement learning 20:45 Open-ended learning 26:15 Power seeking paper 32:30 Safety and assumptions 35:20 Intrinsic vs. extrinsic motivation 42:00 Mapping natural language 46:20 Timnit Gebru’s research institute 49:20 Wrap-up
50:21
107. Kevin Hu - Data observability and why it matters
Episode of Towards Data Science
Imagine for a minute that you’re running a profitable business, and that part of your sales strategy is to send the occasional mass email to people who’ve signed up to be on your mailing list. For a while, this approach leads to a reliable flow of new sales, but then one day, that abruptly stops. What happened? You pore over logs, looking for an explanation, but it turns out that the problem wasn’t with your software; it was with your data. Maybe the new intern accidentally added a character to every email address in your dataset, or shuffled the names on your mailing list so that Christina got a message addressed to “John”, or vice-versa. Versions of this story happen surprisingly often, and when they happen, the cost can be significant: lost revenue, disappointed customers, or worse — an irreversible loss of trust. Today, entire products are being built on top of datasets that aren’t monitored properly for critical failures — and an increasing number of those products are operating in high-stakes situations. That’s why data observability is so important: the ability to track the origin, transformations and characteristics of mission-critical data to detect problems before they lead to downstream harm. And it’s also why we’ll be talking to Kevin Hu, the co-founder and CEO of Metaplane, one of the world’s first data observability startups. Kevin has a deep understanding of data pipelines, and the problems that can pop up when they aren’t properly monitored. He joined me to talk about data observability, why it matters, and how it might be connected to responsible AI on this episode of the TDS podcast. Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc Chapters: 0:00 Intro 2:00 What is data observability? 8:20 Difference between a dataset’s internal and external characteristics 12:20 Why is data so difficult to log? 17:15 Tracing back models 22:00 Algorithmic analysis of data 26:30 Data ops in five years 33:20 Relation to cutting-edge AI work 39:25 Software engineering and startup funding 42:05 Problems on a smaller scale 46:40 Future data ops problems to solve 48:45 Wrap-up
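As a toy illustration of the kind of check a data observability layer runs, the sketch below validates a mailing-list table against the failure modes described above (malformed addresses, duplicates, a sudden drop in volume). It is a hand-rolled example for illustration only, not Metaplane's product or API, and the column names and thresholds are assumptions.

```python
# Toy data observability checks for a mailing-list table (illustrative only;
# real observability tools track these metrics over time and alert on anomalies).
import pandas as pd

EMAIL_PATTERN = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"

def check_mailing_list(df: pd.DataFrame, expected_min_rows: int = 1000) -> list:
    """Return human-readable warnings about likely data problems."""
    warnings = []
    # Volume: a sudden drop in row count often signals an upstream failure.
    if len(df) < expected_min_rows:
        warnings.append(f"only {len(df)} rows; expected at least {expected_min_rows}")
    # Validity: catches the 'intern added a character to every email' scenario.
    bad = (~df["email"].fillna("").str.match(EMAIL_PATTERN)).sum()
    if bad:
        warnings.append(f"{bad} malformed email addresses")
    # Uniqueness: duplicated or shuffled keys show up here.
    dupes = df["email"].duplicated().sum()
    if dupes:
        warnings.append(f"{dupes} duplicate email addresses")
    return warnings

# Usage (hypothetical file):
# df = pd.read_csv("mailing_list.csv")
# for w in check_mailing_list(df):
#     print("DATA ALERT:", w)
```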
49:56
106. Yang Gao - Sample-efficient AI
Episode of Towards Data Science
Historically, AI systems have been slow learners. For example, a computer vision model often needs to see tens of thousands of hand-written digits before it can tell a 1 apart from a 3. Even game-playing AIs like DeepMind’s AlphaGo, or its more recent descendant MuZero, need far more experience than humans do to master a given game. So when someone develops an algorithm that can reach human-level performance at anything as fast as a human can, it’s a big deal. And that’s exactly why I asked Yang Gao to join me on this episode of the podcast. Yang is an AI researcher with affiliations at Berkeley and Tsinghua University, who recently co-authored a paper introducing EfficientZero: a reinforcement learning system that learned to play Atari games at the human level after just two hours of in-game experience. It’s a tremendous breakthrough in sample-efficiency, and a major milestone in the development of more general and flexible AI systems. --- Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:50 Yang’s background - 6:00 MuZero’s activity - 13:25 MuZero to EfficientZero - 19:00 Sample efficiency comparison - 23:40 Leveraging algorithmic tweaks - 27:10 Importance of evolution to human brains and AI systems - 35:10 Human-level sample efficiency - 38:28 Existential risk from AI in China - 47:30 Evolution and language - 49:40 Wrap-up
49:53
105. Yannic Kilcher - A 10,000-foot view of AI
Episode of Towards Data Science
There once was a time when AI researchers could expect to read every new paper published in the field on the arXiv, but today, that’s no longer the case. The recent explosion of research activity in AI has turned keeping up to date with new developments into a full-time job. Fortunately, people like YouTuber, ML PhD and sunglasses enthusiast Yannic Kilcher make it their business to distill ML news and papers into a digestible form for mortals like you and me to consume. I highly recommend his channel to any TDS podcast listeners who are interested in ML research — it’s a fantastic resource, and literally the way I finally managed to understand the Attention is All You Need paper back in the day. Yannic joined me to talk about what he’s learned from years of following, reporting and doing AI research, including the trends, the challenges and the opportunities that he expects are going to shape the course of AI history in coming years. --- Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:20 Yannic’s path into ML - 7:25 Selecting ML news - 11:45 AI ethics → political discourse - 17:30 AI alignment - 24:15 Malicious uses - 32:10 Impacts on persona - 39:50 Bringing in human thought - 46:45 Math with big numbers - 51:05 Metrics for generalization - 58:05 The future of AI - 1:02:58 Wrap-up
01:03:04
104. Ken Stanley - AI without objectives
Episode of Towards Data Science
Today, most machine learning algorithms use the same paradigm: set an objective, and train an agent, a neural net, or a classical model to perform well against that objective. That approach has given good results: these types of AI can hear, speak, write, read, draw, drive and more. But they’re also inherently limited: because they optimize for objectives that seem interesting to humans, they often avoid regions of parameter space that are valuable, but that don’t immediately seem interesting to human beings, or the objective functions we set. That poses a challenge for researchers like Ken Stanley, whose goal is to build broadly superintelligent AIs — intelligent systems that outperform humans at a wide range of tasks. Among other things, Ken is a former startup founder and AI researcher whose career has included work in academia, at Uber AI Labs, and most recently at OpenAI, where he leads the open-ended learning team. Ken joined me to talk about his 2015 book Why Greatness Cannot Be Planned: The Myth of the Objective, what open-endedness could mean for humanity, the future of intelligence, and even AI safety on this episode of the TDS podcast.
01:06:27
103. Gillian Hadfield - How to create explainable AI regulations that actually make sense
Episode of Towards Data Science
It’s no secret that governments around the world are struggling to come up with effective policies to address the risks and opportunities that AI presents. And there are many reasons why that’s happening: many people — including technical people — think they understand what frontier AI looks like, but very few actually do, and even fewer are interested in applying their understanding in a government context, where salaries are low and stock compensation doesn’t even exist. So there’s a critical policy-technical gap that needs bridging, and failing to address that gap isn’t really an option: it would mean flying blind through the most important test of technological governance the world has ever faced. Unfortunately, policymakers have had to move ahead with regulating and legislating with that dangerous knowledge gap in place, and the result has been less than stellar: widely criticized definitions of privacy and explainability, and definitions of AI that create exploitable loopholes, are among some of the more concerning results. Enter Gillian Hadfield, a Professor of Law and Professor of Strategic Management, and Director of the Schwartz Reisman Institute for Technology and Society. Gillian’s background is in law and economics, which has led her to AI policy, and to definitional problems with recent and emerging regulations on AI and privacy. But — as I discovered during the podcast — she also happens to be related to Dylan Hadfield-Menell, an AI alignment researcher whom we’ve had on the show before. Partly through Dylan, Gillian has also been exploring how principles of AI alignment research can be applied to AI policy, and to contract law. Gillian joined me to talk about all that and more on this episode of the podcast. --- Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: 1:35 Gillian’s background 8:44 Lawyers and governments’ legislation 13:45 Explanations and justifications 17:30 Explainable humans 24:40 Goodhart’s Law 29:10 Bringing in AI alignment 38:00 GDPR 42:00 Involving technical folks 49:20 Wrap-up
51:07
102. Wendy Foster - AI ethics as a user experience challenge
Episode of Towards Data Science
AI ethics is often treated as a dry, abstract academic subject. It doesn’t have the kinds of consistent, unifying principles that you might expect from a quantitative discipline like computer science or physics. But somehow, the ethics rubber has to meet the AI road, and where that happens — where real developers have to deal with real users and apply concrete ethical principles — is where you find some of the most interesting, practical thinking on the topic. That’s why I wanted to speak with Wendy Foster, the Director of Engineering and Data Science at Shopify. Wendy’s approach to AI ethics is refreshingly concrete and actionable. And unlike more abstract approaches, it’s based on clear principles like user empowerment: the idea that you should avoid forcing users to make particular decisions, and instead design interfaces that frame AI-recommended actions as suggestions that can be ignored or acted on. Wendy joined me to discuss her practical perspective on AI ethics, the importance of user experience design for AI products, and how responsible AI gets baked into product at Shopify on this episode of the TDS podcast. --- Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:40 Wendy’s background - 4:40 What does practice mean? - 14:00 Different levels of explanation - 19:05 Trusting the system - 24:00 Training new folks - 30:02 Company culture - 34:10 The core of AI ethics - 40:10 Communicating with the user - 44:15 Wrap-up
44:36
101. Ayanna Howard - AI and the trust problem
Episode of Towards Data Science
Over the last two years, the capabilities of AI systems have exploded. AlphaFold2, MuZero, CLIP, DALL-E, GPT-3 and many other models have extended the reach of AI to new problem classes. There’s a lot to be excited about. But as we’ve seen in other episodes of the podcast, there’s a lot more to getting value from an AI system than jacking up its capabilities. And increasingly, one of those missing factors is trust. You can make all the powerful AIs you want, but if no one trusts their output — or if people trust it when they shouldn’t — you can end up doing more harm than good. That’s why we invited Ayanna Howard on the podcast. Ayanna is a roboticist, entrepreneur and Dean of the College of Engineering at Ohio State University, where she focuses her research on human-machine interactions and the factors that go into building human trust in AI systems. She joined me to talk about her research, its applications in medicine and education, and the future of human-machine trust. --- Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:30 Ayanna’s background - 6:10 The interpretability of neural networks - 12:40 Domain of machine-human interaction - 17:00 The issue of preference - 20:50 Gell-Mann/newspaper amnesia - 26:35 Assessing a person’s persuadability - 31:40 Doctors and new technology - 36:00 Responsibility and ability - 43:15 The social pressure aspect - 47:15 Is Ayanna optimistic? - 53:00 Wrap-up
53:15
100. Max Jaderberg - Open-ended learning at DeepMind
Episode of Towards Data Science
On the face of it, there’s no obvious limit to the reinforcement learning paradigm: you put an agent in an environment and reward it for taking good actions until it masters a task. And by last year, RL had achieved some amazing things, including mastering Go, various Atari games, Starcraft II and so on. But the holy grail of AI isn’t to master specific games, but rather to generalize — to make agents that can perform well on new games that they haven’t been trained on before. Fast forward to July of this year, though, and a team at DeepMind published a paper called “Open-Ended Learning Leads to Generally Capable Agents”, which takes a big step in the direction of general RL agents. Joining me for this episode of the podcast is one of the co-authors of that paper, Max Jaderberg. Max came into the Google ecosystem in 2014 when they acquired his computer vision company, and more recently, he started DeepMind’s open-ended learning team, which is focused on pushing machine learning further into the territory of cross-task generalization ability. I spoke to Max about open-ended learning, the path ahead for generalization and the future of AI. --- Intro music by: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:30 Max’s background - 6:40 Differences in procedural generations - 12:20 The qualitative side - 17:40 Agents’ mistakes - 20:00 Measuring generalization - 27:10 Environments and loss functions - 32:50 The potential of symbolic logic - 36:45 Two distinct learning processes - 42:35 Forecasting research - 45:00 Wrap-up
45:25
99. Margaret Mitchell - (Practical) AI ethics
Episode of Towards Data Science
Bias gets a bad rap in machine learning. And yet, the whole point of a machine learning model is that it biases certain inputs toward certain outputs — a picture of a cat to a label that says “cat”, for example. Machine learning is bias-generation. So removing bias from AI isn’t an option. Rather, we need to think about which biases are acceptable to us, and how extreme they can be. These are questions that call for a mix of technical and philosophical insight that’s hard to find. Luckily, I managed to find just that by inviting onto the podcast none other than Margaret Mitchell, a former Senior Research Scientist in Google’s Research and Machine Intelligence Group, whose work has focused on practical AI ethics. And by practical, I really do mean the nuts and bolts of how AI ethics can be baked into real systems, and of navigating the complex moral issues that come up when the AI rubber meets the road. *** Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc *** Chapters: - 0:00 Intro - 1:20 Margaret’s background - 8:30 Meta learning and ethics - 10:15 Margaret’s day-to-day - 13:00 Sources of ethical problems within AI - 18:00 Aggregated and disaggregated scores - 24:02 How much bias will be acceptable? - 29:30 What biases does the AI ethics community hold? - 35:00 The overlap of these fields - 40:30 The political aspect - 45:25 Wrap-up
45:43
98. Mike Tung - Are knowledge graphs AI’s next big thing?
Episode of Towards Data Science
As impressive as they are, language models like GPT-3 and BERT all have the same problem: they’re trained on reams of internet data to imitate human writing. And human writing is often wrong, biased, or both, which means language models are trying to emulate an imperfect target. Language models often babble, or make up answers to questions they don’t understand, which can make them unreliable sources of truth. That’s why there’s been increased interest in alternative ways to retrieve information from large datasets — approaches that include knowledge graphs. Knowledge graphs encode entities like people, places and objects into nodes, which are then connected to other entities via edges, which specify the nature of the relationship between the two. For example, a knowledge graph might contain a node for Mark Zuckerberg, linked to another node for Facebook, via an edge that indicates that Zuck is Facebook’s CEO. Both of these nodes might in turn be connected to dozens, or even thousands of others, depending on the scale of the graph. Knowledge graphs are an exciting path ahead for AI capabilities, and one of the world’s largest knowledge graphs is built by a company called Diffbot, whose CEO Mike Tung joined me for this episode of the podcast to discuss where knowledge graphs can improve on more standard techniques, and why they might be a big part of the future of AI. --- Intro music by: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- 0:00 Intro 1:30 The Diffbot dynamic 3:40 Knowledge graphs 7:50 Crawling the internet 17:15 What makes this time special? 24:40 Relation to neural networks 29:30 Failure modes 33:40 Sense of competition 39:00 Knowledge graphs for discovery 45:00 Consensus to find truth 48:15 Wrap-up
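To make the node-and-edge structure concrete, here is a tiny knowledge-graph sketch using the networkx library. It encodes the Zuckerberg/Facebook example from above as typed nodes connected by labeled edges; it is a toy illustration of the idea, not Diffbot's actual graph, schema, or API.

```python
# Toy knowledge graph: entities are nodes, typed relationships are edges.
# Illustration only; not Diffbot's graph, schema, or API.
import networkx as nx

kg = nx.MultiDiGraph()  # directed multigraph: two entities can share several relations

# Nodes carry an entity type; edges carry the relationship between entities.
kg.add_node("Mark Zuckerberg", entity_type="Person")
kg.add_node("Facebook", entity_type="Organization")
kg.add_edge("Mark Zuckerberg", "Facebook", relation="chief_executive_of")
kg.add_edge("Mark Zuckerberg", "Facebook", relation="founder_of")

# Answering a question means walking explicit edges rather than generating text,
# so the result is grounded in stored facts instead of a model's best guess.
for subj, obj, data in kg.out_edges("Mark Zuckerberg", data=True):
    print(f"{subj} --{data['relation']}--> {obj}")
```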
48:56
97. Anthony Habayeb - The present and future of AI regulation
Episode of Towards Data Science
Corporate governance of AI doesn’t sound like a sexy topic, but it’s rapidly becoming one of the most important challenges for big companies that rely on machine learning models to deliver value for their customers. More and more, they’re expected to develop and implement governance strategies to reduce the incidence of bias and increase the transparency of their AI systems and development processes. Those expectations have historically come from consumers, but governments are starting to impose hard requirements, too. So for today’s episode, I spoke to Anthony Habayeb, founder and CEO of Monitaur, a startup focused on helping businesses anticipate and comply with new AI regulations and governance requirements. Anthony’s been watching the world of AI regulation very closely over the last several years, and was kind enough to share his insights on the current state of play and the future direction of the field. --- Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:45 Anthony’s background - 6:20 Philosophies surrounding regulation - 14:50 The role of governments - 17:30 Understanding fairness - 25:35 AI’s PR problem - 35:20 Governments’ regulation - 42:25 Useful techniques for data science teams - 46:10 Future of AI governance - 49:20 Wrap-up
49:31
96. Jan Leike - AI alignment at OpenAI
Episode of Towards Data Science
The more powerful our AIs become, the more we’ll have to ensure that they’re doing exactly what we want. If we don’t, we risk building AIs that use dangerously creative solutions with side-effects that could be undesirable, or downright dangerous. Even a slight misalignment between the motives of a sufficiently advanced AI and human values could be hazardous. That’s why leading AI labs like OpenAI are already investing significant resources into AI alignment research. Understanding that research is important if you want to understand where advanced AI systems might be headed, and what challenges we might encounter as AI capabilities continue to grow — and that’s what this episode of the podcast is all about. My guest today is Jan Leike, head of AI alignment at OpenAI, and an alumnus of DeepMind and the Future of Humanity Institute. As someone who works directly with some of the world’s largest AI systems (including OpenAI’s GPT-3), Jan has a unique and interesting perspective to offer both on the current challenges facing alignment researchers, and on the most promising future directions the field might take. --- Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: 0:00 Intro 1:35 Jan’s background 7:10 Timing of scalable solutions 16:30 Recursive reward modeling 24:30 Amplification of misalignment 31:00 Community focus 32:55 Wireheading 41:30 Arguments against the democratization of AIs 49:30 Differences between capabilities and alignment 51:15 Research to focus on 1:01:45 Formalizing an understanding of personal experience 1:04:04 OpenAI hiring 1:05:02 Wrap-up
01:05:17
95. Francesca Rossi - Thinking, fast and slow: AI edition
Episode of Towards Data Science
The recent success of large transformer models in AI raises new questions about the limits of current strategies: can we expect deep learning, reinforcement learning and other prosaic AI techniques to get us all the way to humanlike systems with general reasoning abilities? Some think so, and others disagree. One dissenting voice belongs to Francesca Rossi, a former professor of computer science, and now AI Ethics Global Leader at IBM. Much of Francesca’s research is focused on deriving insights from human cognition that might help AI systems generalize better. Francesca joined me for this episode of the podcast to discuss her research, her thinking, and her thinking about thinking.
46:42
94. Divya Siddarth - Are we thinking about AI wrong?
Episode of Towards Data Science
AI research is often framed as a kind of human-versus-machine rivalry that will inevitably lead to the defeat — and even wholesale replacement — of human beings by artificial superintelligences that have their own sense of agency, and their own goals. Divya Siddarth disagrees with this framing. Instead, she argues, this perspective leads us to focus on applications of AI that are neither as profitable as they could be, nor safe enough to protect us from the potentially catastrophic consequences of dangerous AI systems in the long run. And she ought to know: Divya is an associate political economist and social technologist in the Office of the CTO at Microsoft. She’s also spent a lot of time thinking about what governments can — and are — doing to shift the framing of AI away from centralized systems that compete directly with humans, and toward a more cooperative model, which would see AI as a kind of facilitation tool that gets leveraged by human networks. Divya points to Taiwan as an experiment in digital democracy that’s doing just that.
01:02:44
93. 2021: A year in AI (so far) - Reviewing the biggest AI stories of 2021 with our friends at the Let’s Talk AI podcast
Episode of Towards Data Science
2020 was an incredible year for AI. We saw powerful hints of the potential of large language models for the first time thanks to OpenAI’s GPT-3, DeepMind used AI to solve one of the greatest open problems in molecular biology, and Boston Dynamics demonstrated their ability to blend AI and robotics in dramatic fashion. Progress in AI is accelerating exponentially, and though we’re just over halfway through 2021, this year is already turning into another one for the books. So we decided to partner with our friends over at Let’s Talk AI, a podcast co-hosted by Stanford PhD and former Googler Sharon Zhou, and Stanford PhD student Andrey Kurenkov, that covers current events in AI. This was a fun chat, and a format we’ll definitely be playing with more in the future :)
43:31
92. Daniel Filan - Peering into neural nets for AI safety
Episode of Towards Data Science
Many AI researchers think it’s going to be hard to design AI systems that remain safe as AI capabilities increase. We’ve already seen on the podcast that the field of AI alignment has emerged to tackle this problem, but a related effort is also being directed at a separate dimension of the safety problem: AI interpretability. Our ability to interpret how AI systems process information and make decisions will likely become an important factor in assuring the reliability of AIs in the future. And my guest for this episode of the podcast has focused his research on exactly that topic. Daniel Filan is an AI safety researcher at Berkeley, where he’s supervised by AI pioneer Stuart Russell. Daniel also runs AXRP, a podcast dedicated to technical AI alignment research.
01:06:02