Can Robots Be Trusted?

Robby the Robot embodied Asimov’s robotic laws. The robot also provided an early demonstration of self-driving cars. Watch this clip from the 1956 movie, “Forbidden Planet,” which introduced Robby the Robot to the public.

In “Self-driving cars and the Trolley problem,” Tanay Jaipuria provides an interesting and valuable examination of some of the dilemmas posed by trusting automatons such as self-driving vehicles to take care of people:

…[C]an you imagine a world in which say Google or Apple places a value on each of our lives, which could be used at any moment of time to turn a car into us to save others? Would you be okay with that?

But there is another variation on the “trolley problem,” one I created several years ago, that illustrates a gap in the argument presented by Jaipuria:

You are driving across a bridge in the middle of the city. Traffic is light. Yours is actually the only car in sight. As you approach the cusp of the bridge, you see a fat man climbing over the railing, evidently about to jump off. Being a Good Samaritan, you stop your car, run over, grab the fat man’s coat, and try to pull him back. He struggles and cries “Let me go!” As he looks down from the railing he suddenly slumps to the deck and moans, “Too late!” Puzzled, you look over the railing and see a runaway trolley car hurtling downhill toward five children playing on the track. You realize the fat man was bravely attempting to hurl his huge body onto the track to stop the trolley and save the children.

This idealized scenario illustrates the very common dilemma of unintended consequences. And that stems from having incomplete information, and thus an incomplete model of the problem to be solved. When the problem is a critical situation that urgently demands quick decisions under often chaotic circumstances, the conundrum is compounded. Requiring outcomes that are not only efficient but ethically justifiable only aggravates the difficulty.

Some may conclude that automatons cannot or at least should not be trusted to make such nuanced but freighted decisions.

But the practical issue comes down to whether automatons — often just in the form of decision algorithms — lead to better outcomes overall than humans commonly attain. Moreover, “better” doesn’t mean theoretically correct but just socially acceptable: that is, what Herbert Simon called “satisficing.”

People ask me, ‘Are you concerned about self-driving cars?’ And I say, ‘Yes, but I’m terrified of today’s cars.’ Thirty thousand people die every year on U.S. roads, and over a million are injured. Ninety-four percent of those crashes are caused by human error. — Bryant Walker Smith

Actually, there are some reasons to expect that automatons are likely to make better decisions, on average, in critical situations than people commonly do. First, with decent computing power, the automatons usually can analyze more information from more sources faster than the human brain can. Second, they can access more kinds of sensory information from more sensory devices than the human can. Third, connected to distributed information networks, they can at least potentially have a much broader scope of “situational awareness” than the human can access and absorb.

The Overload Problem

That last point, “absorb,” is significant. The human mind — as most of us can readily attest in the wired world we now occupy — is prone to being stymied by information overload. This is a problem that has increasingly vexed designers of complex operational systems (the electrical grid, air traffic control) and of military weapons systems.

For instance, the F-35 Lightning II strike fighter is one of the most complex (and expensive) weapons systems ever built. To adapt the aircraft’s immensely complicated systems, and the fire hydrant of data they constantly gush, to human limitations, BAE designers developed the Striker II helmet — a $400,000 piece of equipment that actually enables the pilot to “see through” the airplane.

Because the human is increasingly a limiting factor in system performance, there is growing impetus, not only in the military but in aviation more broadly, to replace the human role with automatons. For instance, the New York Times recently reported:

 This summer, the Defense Advanced Research Projects Agency, the Pentagon research organization, will take the next step in plane automation with the Aircrew Labor In-Cockpit Automation System, or Alias. Sometime this year, the agency will begin flight testing a robot that can be quickly installed in the right seat of military aircraft to act as the co-pilot. The portable onboard robot will be able to speak, listen, manipulate flight controls and read instruments.

The machine, a bit like R2D2, will have many of the skills of a human pilot, including the ability to land the plane and to take off. It will assist the human pilot on routine flights and be able to take over the flight in emergency situations…

NASA is exploring a related possibility: moving the co-pilot out of the cockpit on commercial flights, and instead using a single remote operator to serve as co-pilot for multiple aircraft.

Meanwhile, the latest issue of Popular Science foresees even more advanced military aircraft now in the development pipeline:

…with regional threats growing and portable surface-to-air missiles evolving, engineers have once again set out to build the fastest military jet on the planet.

This time, it will take the form of a 4,000-mile-per-hour reconnaissance drone with strike capability. Known as the SR-72, the aircraft will evade assault, take spy photos, and attack targets at speeds of up to Mach 6. That’s twice as fast as its predecessor.

As the word “drone” above suggests, the aircraft has no cockpit and no pilot.

A Bigger Quandary

There may well be a more difficult question than whether automatons can be trusted to make better decisions than humans do. That is: What happens when the automatons can and do? Serious problems are likely to arise when automated systems perform important tasks more accurately, more effectively, and more reliably than humans — most of the time.

That “most of the time” is where the rub is. The issue comes back to my trolley problem posed above.

No matter how much data automatons can access, no matter how many variables they can consider simultaneously, and no matter how fast their processors can hum, their decisions are based on information and on models that may be incomplete and thus dangerously flawed. Not only “may be” but inevitably will be. Taking off on George Box’s oft-cited admonition: Even though some models may be very useful, “all models are wrong.”

Murphy said, “Anything that can go wrong, will go wrong.” But things that supposedly can’t go wrong do too.

Even when an automated system is so nearly flawless that the probability of its failing seems vanishingly small — indeed, far smaller than the probability of the human errors it replaces — if it handles a big enough mass of data and performs a sufficiently large number of operations, grievous failures are going to happen.
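A rough back-of-the-envelope calculation makes the point. The failure rate and operation counts below are illustrative assumptions, not measurements of any real system, but they show how a one-in-a-million chance of error per operation turns into a near-certainty of at least one error once the volume grows large enough:

```python
# Illustrative only: the failure rate and operation counts are assumptions,
# not measurements of any real automated system.
p_failure = 1e-6  # assumed chance that any single operation goes wrong

for n_operations in (1_000, 100_000, 1_000_000, 10_000_000):
    # Probability that at least one of n independent operations fails:
    # one minus the probability that every one of them succeeds.
    p_at_least_one = 1 - (1 - p_failure) ** n_operations
    print(f"{n_operations:>10,} operations -> P(at least one failure) = {p_at_least_one:.5f}")
```

Under these assumed numbers, the odds of at least one slip are already roughly two in three by a million operations, and all but certain by ten million.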

But because the brilliant automaton so often gets things right, and generally performs so much more reliably than people do, the capabilities of the people who depend on such robotic servants are prone to decline, if only from lack of use. Worse, the people whose labor is supposedly “augmented” by their automaton collaborators tend to lose confidence in their own ability. In a pinch, the humans become inclined to defer to the automated expert, and are reluctant to challenge its decisions.

This is not merely a hypothetical problem.

In a disturbing case study presented in a series of five articles on Medium, Dr. Bob Wachter recounts how a leading-edge automated system designed to eliminate medication errors at the UCSF hospital (where Wachter works) actually led to one teenage patient receiving a 38-fold overdose of an antibiotic that nearly killed him.

By now, my jaw was somewhere on the floor. I was amazed that this could happen in one of America’s top hospitals, equipped with the best healthcare information technology that money can buy. — Bob Wachter

The gist of Wachter’s account (well worth reading in its entirety) is that the automated system designed to stop medication errors — which harm hundreds of thousands of patients each year in the U.S. — had several layers of defenses to catch mistakes before they could do harm. But each was like a slice of Swiss cheese, with some holes. Most of the time, a mistake that slipped through the gap in one layer would be stopped by another. And in fact, since the system was implemented, it had intercepted and fixed many medication glitches. Overall, it had significantly reduced the number of patients harmed by getting the wrong medicine or the wrong dosage.

There was still a possibility, though, that sometimes the holes in the layers would in effect line up, allowing a blunder to go unchecked. As more and more cases and prescriptions were handled by the automated system, it became increasingly likely that a seemingly improbable convergence of gaps would occur. And that is just what happened in the case Wachter reports.
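A similar sketch shows why the layered defenses eventually line up. The number of layers, their miss rates, and the volume of erroneous orders below are assumptions chosen for illustration, not figures from Wachter’s account or from UCSF:

```python
# Illustrative "Swiss cheese" arithmetic; every number here is an assumption.
layer_miss_rates = [0.05, 0.05, 0.05, 0.05]  # chance each defensive layer misses a given error

# Chance that one particular error slips past every layer (the holes line up),
# assuming the layers fail independently of one another.
p_all_layers_miss = 1.0
for miss_rate in layer_miss_rates:
    p_all_layers_miss *= miss_rate
print(f"P(a given error evades every layer) = {p_all_layers_miss:.2e}")  # 6.25e-06

# Over a large volume of erroneous orders, an escape stops being negligible.
n_erroneous_orders = 100_000
p_at_least_one_escape = 1 - (1 - p_all_layers_miss) ** n_erroneous_orders
print(f"P(at least one error reaches a patient) = {p_at_least_one_escape:.2f}")  # ~0.46
```

Each individual escape looks wildly improbable, yet across enough orders the chance that one eventually gets through becomes substantial — which is exactly the pattern Wachter describes.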

Reading the article, one wonders nevertheless how the human professionals responsible for the patient’s care — the doctor, the pharmacist, and the nurse who administered the toxic dose of antibiotic — could all have allowed such an egregious overdose to be given to the patient.

Wachter goes into great detail tracing how that happened, including each one’s view of what they did and why. It came down to being lulled by the seductive trust that highly reliable automated experts inspire — which leads concurrently to a reluctance to question or push back.

What to Do

Jaipuria’s article chews over several of the ethical dilemmas posed by relying on self-driving cars, and presumably other robotic servants. But he concludes with more questions than answers.

For Wachter the problems are not just abstract. As a doctor who witnessed at close range a failure that nearly killed a young patient in his own hospital, he is determined not just to diagnose the problem but to prescribe a solution.

For the issues in that case, he proposes basically two fixes. One, based on experience in the aviation industry, includes steps to reduce distractions and information overload — improving active situational awareness.

The other, adapted from established manufacturing practices like the Toyota Production System, is to provide some kind of "stop the line" button for every person involved in the process. Not only that, but to institutionalize its use. This is analogous to the slogan promoted for homeland security: "If you see something, say something."

That requires a culture quite the opposite of the traditional “Don’t make waves,” and “Go along to get along.” Instead, every person involved in the process, regardless of title or status, must take personal responsibility for the safety and reliability of the whole venture. So if you have any concern that something’s wrong, you must act: Push the stop button, speak up, do something. At the same time, the organization or community as a whole has a collateral obligation to expect such individual initiative and to reward it.

Wachter’s prescription is constructive, but not complete. It is not a panacea for all the dilemmas posed by the growing employment of and dependence on ever more intelligent automatons. But it points in the right direction.

Whatever the solutions to the conundrums are, simply forgoing or prohibiting deployment of canny automatons will rarely if ever be an option. As the quote from Bryant Walker Smith above suggests, there are substantial opportunity costs from not using advanced technology. So most of the public in most cases is likely to view the benefits that automatons can provide — saving lives, improving health, enhancing safety, increasing wealth, and more — as outweighing their collateral risks.

Consider the recent crash of an Amtrak train en route from Philadelphia to New York. The deadly derailment, which killed eight people and injured scores of others, occurred because the train entered a curve in the track at over 100 miles an hour, more than double the speed limit. (Exactly why this happened is being investigated.)

An automated system called Positive Train Control (PTC) is designed to override the human engineer and stop a train that is in danger. The Amtrak train derailed in the one portion of the route where PTC had not yet been installed.
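For readers unfamiliar with how such enforcement works, here is a highly simplified sketch of the kind of check a train-protection system runs continuously. The three-tier response, the thresholds, and the function name are illustrative assumptions, not the actual Positive Train Control specification, and the speeds only echo the figures in the paragraph above:

```python
# Highly simplified sketch of automatic speed enforcement. The thresholds,
# the three-step response, and the function name are illustrative assumptions,
# not the actual Positive Train Control specification.

def enforce_speed(current_mph: float, limit_mph: float, warning_margin_mph: float = 3.0) -> str:
    """Return the system's response to the train's current speed on this track segment."""
    if current_mph <= limit_mph:
        return "no action"            # the engineer remains fully in control
    if current_mph <= limit_mph + warning_margin_mph:
        return "warn engineer"        # overspeed alert, no intervention yet
    return "apply penalty brake"      # override the engineer and stop the train

# Roughly the situation described above: a curve with a 50 mph limit
# entered at more than double that speed (exact values illustrative).
print(enforce_speed(current_mph=102, limit_mph=50))   # -> apply penalty brake
```

The crucial point is that the override does not depend on the engineer noticing anything at all; the system brakes the train regardless — but only on track segments where it has actually been installed.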

In the wake of the disaster, The Hill reported:

This week’s deadly Amtrak derailment is sparking fresh calls for automated trains on the nation’s rails, even as industry groups press for an extension of this year’s deadline to implement the technology.

Originally published on Medium (June 16, 2015)