I disagree with several premises in this paper, most notably that we will need to be able to ‘predict when failures might occur’ in operating AI systems, like driverless cars.
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
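The pipeline described here, raw sensor data flowing through layers of artificial neurons and out as control commands, can be sketched in a few lines. Everything below (the layer sizes, the feature values, the randomly initialised weights standing in for a trained model) is a hypothetical toy, not Nvidia's actual network; the point is only to show why the intermediate state resists explanation.

```python
# Toy end-to-end control network: sensor readings in, a steering command out.
# All sizes and values are illustrative assumptions, not a real system.
import math
import random

random.seed(0)

def layer(inputs, weights):
    """One fully connected layer with a tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Random weights stand in for parameters learned by watching a human drive.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(1)]

sensors = [0.9, -0.2, 0.05, 0.4]   # e.g. camera-derived features (made up)
hidden = layer(sensors, w1)         # opaque intermediate state
steering = layer(hidden, w2)[0]     # a single command in (-1, 1)

print(round(steering, 3))
```

Even in this toy, the only human-readable artifacts are the inputs and the output; the hidden vector is just numbers, and nothing in it says *why* the command came out as it did.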
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will.
On the contrary, considering how many people are killed by human beings operating cars and trucks, my sense is that any solution that offers the combination of lowered mortality rates, increased convenience, and lower cost will be accepted as a gift from the gods.
Consider that almost everything people do – from something as instinctive as keeping our balance while walking down the street, to something as cerebral as playing Go – was learned in much the way deep learning works in AI. People can’t explain how they balance, and the world’s greatest Go players can’t say why one move seems better than another.
The way we have programmed computers to date – deterministically, instruction by instruction – is the dumb and inflexible way. That’s going to look old school very soon.
Knight interviewed the cognitive scientist and philosopher Daniel Dennett, who offered this rather tame observation:
If it can’t do better than us at explaining what it’s doing, then don’t trust it.
This is a requirement we don’t apply when trusting other people. People are remarkably bad at explaining what they do, or how they reason, yet we manage to live in a world filled with people.
No, we will adapt to a world of opaque AI, doing what it does without us being able to peer into its circuitry or even to parse some logical precepts guiding its reactions. We will have to stick with the basic empirics, like ‘by their fruits shall you know them’.
And, of course, we will monitor what AIs are up to, using other AIs that we also don’t understand. Maybe that will make Knight and Dennett less afraid.
What do you think? Please take this short survey, Trusting AIs (three questions), to see what members of the workfutures.io community think.