It is an issue that crosscuts every other policy issue.
Show Notes
It doesn’t look like we’re going to be able to put the generative artificial intelligence genie back in the bottle. But we might still be able to prevent some potential damage. Tools like Bard and ChatGPT are already being used in the workplace, educational settings, health care, scientific research, and all over social media. What kind of guardrails do we need to prevent bad actors from causing the worst imaginable outcomes? And who can put those protections in place and enforce them? A panel of AI experts from the 2023 Aspen Ideas Festival shares hopes and fears for this kind of technology, and discusses what can realistically be done by private, public and civil society sectors to keep it in check. Lila Ibrahim, COO of the Google AI company DeepMind, joins social science professor Alondra Nelson and IBM’s head of privacy and trust, Christina Montgomery, for a conversation about charting a path to ethical uses of AI. CNBC tech journalist Deirdre Bosa moderates the conversation and takes audience questions.
Related episodes
Scientists may actually be close to decoding animal communication and figuring out what animals are saying to each other. And more astonishingly, we might even find ways to talk back. The study of sonic communication in animals is relatively new, and researchers have made a lot of headway over the past few decades with recordings and human analysis. But recent...
Artificial intelligence is making world-changing advances every day. But these powerful tools can be used for malicious and nefarious purposes just as easily as they can be used for good. How can society put guardrails on this technology to ensure that we build the safest, most responsible version of the future, where A.I. is assistive rather than weaponized? Google’s sen...
A technological future where our brain waves could be monitored and our thoughts decoded and analyzed — sometimes against our will — is not as far away as we think. But our existing legal protections and conception of human rights around cognitive liberty are trailing innovations in neurotechnology. Brain hacking tools and devices could bring massive benefits, for people s...
Artificial intelligence is clearly going to change our lives in multiple ways. But it’s not yet obvious exactly how, and what the impacts will be. We can predict that certain jobs held by humans will probably be taken over by computers, but what about our thoughts? Will we still think and create in the same ways? Author and former Aspen Institute president Walter Isaacson...
When Sal Khan created Khan Academy, he was trying to scale up the successful experiences he’d had tutoring his cousins one-on-one in math. He saw how effective it could be for students to go at their own pace, ask questions and be questioned about their reasoning, and he wanted to make those benefits available to as many kids as possible. The organization eventually grew t...