Some time ago I heard the story of an eminent scientist (I don’t remember the discipline) who had developed a controversial, ground-breaking idea that had been under attack for several years. Each time it was attacked, he had successfully repelled the criticisms.
One day he was attending a symposium. One of the speakers had found a fatal flaw and spent about 20 minutes completely destroying his idea. The scientist walked onto the stage, shook the speaker’s hand, and said, “Thanks for revealing this fundamental error. Now I can stop defending it and turn my attention to doing something really useful.” The story is probably apocryphal, but how I wish it were true. Here is a story I heard just recently, which I know is not apocryphal because it comes from an eminently reliable source. As Peter Denning, Ubiquity’s editor-in-chief, recounts it:
“I was an assistant professor at Princeton. One day the seminar coordinator announced a special seminar. A famous professor was coming and would present a proof of a famous theorem about automata that had never been proved. Automata are largely self-operating machines.
“The speaker entered, and the room was full of excitement. He began to speak and write on the board. Two or three lines into what he wrote, a voice spoke up in the back of the audience. It was Jeff Ullman, our resident genius on all things automata. Jeff said: ‘I see what the problem is.’ The speaker fell silent.
“Jeff and the speaker then went off to huddle for a few minutes. The speaker came back and said to the audience: ‘Jeff is right. There is no theorem. The rest of this talk is canceled!’”
This is how the world should work, within science and everywhere else. Often it doesn’t. Facts are facts, opinions are opinions, and never the twain should meet. Too often, however, they not only meet but also tussle with one another.
For example, I recently read an account of two people discussing who succeeded Lyndon Johnson as President of the United States. The discussion quickly degenerated into a heated dispute, with red faces, waving arms, and cries of exasperation. One person insisted it was Gerald Ford and the other insisted it was Richard Nixon. After about 20 minutes, a third person entered the room and said, “Why don’t you just look it up?” And they did. It turned out that it was Richard Nixon who succeeded Lyndon Johnson.
The person who had been insisting that it was Gerald Ford wasn’t pleased. She prided herself on her knowledge of American history, so revealing the error was somewhat of a blow to her ego.
I see this sort of thing happening all the time.
- First, instead of just looking it up, which in today’s connected society requires virtually no effort, people almost come to blows over an easily verifiable fact. That is just silliness. Either Richard Nixon succeeded Lyndon Johnson, or he didn’t. Just look it up.
- Second, the person who makes a mistake about an easily verifiable fact is often embarrassed by the exposure of their error and may resent the person who brought it to their (and perhaps other people’s) attention. Becoming resentful of someone who helps you uncover such an error is nonsensical. It’s an example of a psychological phenomenon that is probably best expressed in the distinctly non-psychological declaration, “I would rather be wrong than be corrected.”
Another example of this inimical attitude can be seen in television family sitcoms. These typically feature an impetuous, rash husband who goes off on some ill-conceived tangent, and only at the end is the drastic situation he has created put right by the long-suffering, logical wife.
Although the dumb husband–logical wife scenario is an absurd stereotype, it apparently makes for good entertainment.
I once saw the tables turned. In the final minutes of one episode, the rash, irrational husband turned out to be right and the logical, long-suffering wife turned out to be wrong. The show was filmed in front of a live audience. At this singular, unexpected turn of events, the audience went crazy with enthusiastic applause and cries of “Bravo! Well done! Well done!”
Permit me to cite a personal example of this phenomenon.
My father was a piping draftsman. His major occupation was drawing blueprints of pipes that would snake through a construction project and join up with each other. He could have been a full-fledged construction engineer; however, he grew up during the Great Depression and didn’t have the financial resources to complete his degree.
One day he noticed that a couple of pipes he had been assigned to put into a blueprint would not match up. They would miss their meeting point by a meter or more. He went to his supervisor with the problem, but his warning was dismissed: “If the engineer gave these specifications, then these are the specifications you need to use.” My father tried to argue, but quickly realized that as a draftsman he had no status.
However, when the lead engineer saw the blueprint, he was appalled. “How did this happen?” he thundered. “Get this corrected and get it corrected immediately!”
The engineer seemed to believe the draftsman had made a schoolboy error, and the supervisor made no attempt to defend him. The supervisor went back to my father and told him to redo the blueprint. He did not congratulate him for having spotted the error in the first place. Rather, he treated my father as if, by bringing the error to his attention, he had in some way been disrespectful, even insubordinate. My father was fired a short time afterward.
Being Right or Wrong in Computing
What these examples seem to be saying—no, what they were clearly saying—is that in certain circumstances irrationality is prized over rationality.
But why? What benefit is there in being wrong most of the time and right only occasionally, as opposed to being right most of the time and wrong only occasionally? More pertinently, does this same puzzling phenomenon infect science in general and computing in particular?
My guess is that it does. After all, we are all only human. Becoming angry over being caught in an error rather than taking joy in knowing the error will not be repeated seems to be one of humankind’s most fundamental psychological traits.
I asked Peter Denning how he handles this phenomenon with his computer science students. He said:
“To counteract the phenomenon, it would seem that a good starting point is to avoid certainty. When making a new claim, be open to the possibility that it is false. And then if someone has a plausible challenge to the claim, greet it warmly: ‘How interesting. Let’s have a conversation about this. I’d really like to find out more.’ In this way, you present yourself as a curious person open to conversations rather than a hardline person who won’t discuss apparent errors.
“The love of certainty is a mood. To address it, I talk to my students about moods, which are background dispositions that color how we see the world. I see three associated with how we deal with new events: confusion, perplexity, and wonder.
- Confusion says, “I don’t know what’s going on here, and I don’t like it.”
- Perplexity says, “I don’t know what’s going on here.”
- Wonder says, “I don’t know what’s going on here, and I like it!”
“I ask them which mood they think is most conducive to learning and discovery. Everyone says wonder. Nevertheless, later, when confronted with an unwelcome surprise, they fall into confusion. Automatic negative moods are debilitating and require re-training to cleanse them from your system.”
Or as it was said so well by William Shakespeare, as most things so frequently are: