From self-driving cars and virtual assistants to personalized recommendations and medical diagnostics, artificial intelligence (AI) technology is changing our world. It is altering the ways we conduct business, govern society, provide healthcare, and communicate with and understand one another. But alongside AI’s potential to improve lives comes a set of profound ethical questions concerning privacy and freedom, moral rights and responsibilities, and the very nature of reality. How do we harness the benefits of AI and machine learning while ensuring its ethical use for the benefit of all?
On May 4, over 200 members of the Yale community gathered in Chicago to hear from Yale experts who are tackling big questions around AI. The conversation was timely—earlier that same day, leaders of four prominent American technology companies met with US officials at the White House to discuss responsible innovation in AI.
The latest installment in the For Humanity Illuminated event series, “Artificial Intelligence, Ethics, and Society: Utilizing Technology for Good” was held at Aon Grand Ballroom at Navy Pier. The evening’s program featured five speakers approaching AI through their expertise in philosophy, law, anthropology, religion, and computer science. Suzanne Gignilliat ’80, chair of the Midwest Regional Advisory Committee for the For Humanity campaign, kicked off the evening with introductory remarks.
Expertise Across Disciplines
Scott Shapiro ’90 JD is the Charles F. Southmayd Professor of Law and professor of philosophy at Yale Law School. He also directs the Center for Law and Philosophy, which aims to leverage our knowledge of human behavior to understand the machines we create. Shapiro and his team consider questions such as who should be held accountable if a self-driving car injures or kills someone. By adapting what we know about natural intelligence to artificial intelligence, his team has built a tool that can determine the “intentions” of a self-driving car.
An assistant professor in Yale’s anthropology department, Lisa Messeri describes herself as an anthropologist of technology. She examines the impact of emerging technologies on society, including AI tools that blur our understanding of the real. Messeri considers technology as a product of human ingenuity and studies the ways that technology and culture influence one another. One area of her research focuses on empathetic virtual reality devices, which are designed to help people understand the lives of others—for example, VR that lets caregivers of elderly or disabled people see the world through the eyes of those they are caring for.
If technology continues to advance rapidly, could AI achieve sentience? Could it eventually be capable of suffering or flourishing? At some point, should we be concerned with the well-being of AI entities themselves? These are some of the big questions that John Pittard ’13 PhD, associate professor of philosophy of religion at Yale Divinity School, is exploring. Pittard studies the nature of consciousness and the philosophy of religion and epistemology.
The fastest computer in the world today is more than 18 million times faster than the fastest computer of thirty years ago, noted Rajit Manohar, the John C. Malone Professor of Electrical Engineering and professor of computer science. But we have now reached the limit of what can be done with current silicon technology. Manohar and his team are leveraging our knowledge of how the human brain works to build more efficient computers that mimic simple models of neurons and synapses. They are also collaborating with a team at Yale School of Medicine on customizable chips that could be implanted in specific areas of the human brain to treat conditions like epilepsy.
Andi Peng ’17 is a third-year computer science PhD student and robotics researcher at MIT. She is interested in physical intelligence, and much of her work concerns how robots can and should interact with humans in the physical world. Peng noted that we currently train AI using statistics—predicting the most likely next word in a sentence, for example. But training a computer to perform complex computations is in many ways simpler than teaching a robot to perform basic activities and navigate physical space. Peng explores how simulations of the physical world can be used to teach robots to perform such tasks.
For Socially Responsible Technology
Following their TED-style talks, the five speakers joined a panel discussion moderated by Tamar Gendler ’87, dean of the Faculty of Arts and Sciences. In conversation, the panelists further discussed the unexpected challenges of AI. Gendler encouraged the audience to think about their own hopes and fears around AI and to share them with one another at the reception following the presentations.
Yale Club of Chicago President Cynthia Okechukwu ’08, ’13 JD introduced President Peter Salovey ’86 PhD, who closed the program with reflections on Yale’s role in addressing the technical, ethical, and social considerations of new AI technologies, noting that the For Humanity campaign will help support this important work.
“Developing and understanding AI can’t be done by computer scientists alone,” Salovey said. “Computer scientists must work with ethicists, economists, psychologists, and other experts to advance AI so that it will improve lives. These are areas where Yale has distinct strengths.
“The For Humanity campaign aims to fund new research and collaborations across multiple disciplines to meet the challenge and the promise of AI, for this and future generations.”
Next: London, Washington, DC, and Boston
For Humanity Illuminated travels to London on June 21, Washington, DC, on October 23, and Boston on November 9. Visit the For Humanity Illuminated page to stay up to date on upcoming events and watch recordings of past programs.