Ten (Urgent) Ways to Prevent an AI Apocalypse
Duncan Cass-Beggs, Centre for International Governance Innovation
Friday, September 22, 2023
12:30 p.m. - 1:30 p.m.
307 Tier Building
Talk is FREE. All are welcome. No registration required.
The next generations of large language models could eventually achieve capabilities far exceeding those of humans. But AI developers have not yet found reliable ways to control such advanced AI systems, and we don't know whether it will ever be possible for a lesser intelligence (such as humans) to reliably control a vastly more intelligent entity. A powerful, uncontrollable AI could pose a significant danger to humanity. And it could happen soon.
This talk by Duncan Cass-Beggs, Executive Director of the Global AI Risk Initiative at the Centre for International Governance Innovation, will consider different scenarios for the coming years in order to discuss the moral, political, and legal challenges raised by the imminent prospect of superintelligent AI.