Stuart Russell

Professor of Computer Science, UC Berkeley

Stuart Russell is Professor at the University of California, Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum's Council on AI and Robotics. He is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the World Technology Award, the Mitchell Prize of the American Statistical Association, the Feigenbaum Prize of the Association for the Advancement of Artificial Intelligence, Outstanding Educator Awards from both ACM and AAAI, and the Andrew Carnegie Fellowship. He is an Honorary Fellow of Wadham College, Oxford; a Distinguished Fellow of the Stanford Institute for Human-Centered AI; an Associate Fellow of the Royal Institute of International Affairs (Chatham House); and a Fellow of AAAI, ACM, and AAAS. He held the Chaire Blaise Pascal in Paris from 2012 to 2014. His book "Artificial Intelligence: A Modern Approach", written with Peter Norvig, is the standard text in AI; it has been translated into 14 languages and is used in over 1400 universities in 128 countries. His research covers a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity. The latter topic is the subject of his new book, "Human Compatible: AI and the Problem of Control" (Viking/Penguin, 2019).

How Not to Destroy the World with AI

August 30, 5:30 pm - 6:30 pm (CEST)

I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that its technical aspects are solvable if we replace current definitions of AI with a version based on provable benefit to humans.