One can’t help but feel a modicum of disappointment at the recent proclamation by OSU computer science professor Thomas Dietterich that we shouldn’t fear a Terminator-esque rise of artificial intelligence. On one hand, not that many people outside Michael Biehn, Tesla founder Elon Musk, and Bill Gates are really that scared of this concept, particularly not after seeing the second and third Matrix films. On the other hand, serving robot overlords would undeniably suck. But the disappointment comes from the fact that much less sexy forms of disaster, such as cyber-attacks, basic ineptitude, and hardware malfunction, are far more likely to doom us all.
I mean, given the choice between Skynet and WikiLeaks, I feel like we all would prefer Skynet.
“We’re now talking about doing some pretty difficult and exciting things with AI, such as automobiles that drive themselves, or robots that can effect rescues or operate weapons,” Dietterich said in a press release. “These are high-stakes tasks that will depend on enormously complex algorithms. The biggest risk is that those algorithms may not always work. We need to be conscious of this risk and create systems that can still function safely even when the AI components commit errors.”
Musk and Gates recently made news with their apocalyptic fears that AI would lead to us all being turned into slaves, or batteries, or mannequins for robot clothing in some sort of elaborate robotic shopping mall scenario, but leave it to the eggheads at OSU to throw cold water on all our fun. Dietterich counsels caution and urges us to try to predict where AI might go wrong in the future, so that fail-safes are in place to prevent disaster.
No robot enslavement for you, John Connors of Corvallis; looks like you’ll just have to settle for being hit by a driverless car.