Interview with Stuart Russell: "AI Alignment and the Future of Humanity"
Dr. Stuart Russell is a Professor of Computer Science at the University of California, Berkeley, and an Adjunct Professor of Neurological Surgery at the University of California, San Francisco.
Dr. Russell’s research spans many areas of artificial intelligence, including machine learning, probabilistic reasoning, and philosophical foundations. Recently, his work has focused on ensuring that advanced AI is developed safely.
“If we succeed, we’re going to need to…know how to control these systems. Otherwise, we’ll have a catastrophe.”
Jump to a question:
Could you describe the unintended consequences that have resulted from recommendation algorithms?
Is the problem of misspecified objectives the same as the so-called “value alignment” problem?
Could you describe your principles of “Provably Beneficial AI”?
Why isn’t it as simple as finding and satisfying the “human utility function”?
Would a morality that far surpasses our own even be recognizable to us as moral?
Could you explain what you’ve called the “Dr. Evil” problem?
How should autonomous cars trade off the safety of drivers against the safety of pedestrians?
Do even narrow AI systems need to be “provably beneficial” in the sense you’ve described?
How should young people prepare themselves for the changing job landscape of the future?