In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence. The underlying issue has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory is that the overall problem will be solved as well. The tricky part is separating what we consider “problem” from “context” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service employed engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incentivized to solve problems, while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young, but one of the older dispatchers was considerably slower than most. She handled only about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot, and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time, while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Newell, Shaw, and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (the Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you need to have the book, so purchasing the book becomes your goal. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, those bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
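The contrast above can be sketched in a few lines of code. This is a toy illustration, not the original GPS program: the goal names, the `DECOMPOSITION` table, and the “shortcut” mechanism are all invented for this example. The rigid solver recurses blindly through every sub-goal; the opportunistic solver first checks whether the goal can already be satisfied directly.

```python
# Toy sketch (hypothetical names; NOT the original GPS code) contrasting
# rigid recursive goal decomposition with opportunistic short-circuiting.

# Each goal maps to the sub-goal the rigid solver would pursue first.
DECOMPOSITION = {
    "read the book": "purchase the book",
    "purchase the book": "get $50 cash",
    "get $50 cash": "shovel uncle's driveway",
    "shovel uncle's driveway": "borrow roommate's car",
}

# A new fact in the world: the roommate already owns the book.
DIRECTLY_ACHIEVABLE = {"borrow roommate's copy"}
SHORTCUTS = {"read the book": "borrow roommate's copy"}

def gps_style(goal, plan):
    """Blindly recurse through every sub-goal in the decomposition."""
    sub = DECOMPOSITION.get(goal)
    if sub is not None:
        gps_style(sub, plan)
    plan.append(goal)
    return plan

def opportunistic(goal, plan):
    """Check for a directly achievable shortcut before decomposing."""
    shortcut = SHORTCUTS.get(goal)
    if shortcut in DIRECTLY_ACHIEVABLE:
        plan.append(shortcut)   # new information makes the chain moot
        plan.append(goal)
        return plan
    sub = DECOMPOSITION.get(goal)
    if sub is not None:
        opportunistic(sub, plan)
    plan.append(goal)
    return plan

print(gps_style("read the book", []))       # five steps, car to book
print(opportunistic("read the book", []))   # two steps
```

The rigid solver dutifully borrows the car, shovels the driveway, collects the cash, and buys the book; the opportunistic one notices the roommate’s copy and stops. The bureaucratic failure mode described above corresponds to the first function: each department executes its assigned sub-goal with no mechanism for noticing that the original goal is already satisfiable.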

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, if there is a high degree of trust, if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat,” discouraged “time-wasting” activities like socializing with co-workers, and “saved money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solver.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom, then, the root cause of the problems illustrated in Chapter Eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, making globally intelligent choices nearly impossible due to a lack of knowledge and a lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own examples. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4,000 and 7,500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M,” “University of Michigan,” “Michigan,” “The University of Michigan,” or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan and it isn’t even on the list, at least so far as I could determine. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need for allowing users to communicate in any way that there was an error in the design. If one tries to communicate “out of band,” one is led to a FAQ page and ultimately a form to fill out. The form presumes that all errors are user errors, and that all of these user errors come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
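The more humble design is easy to sketch: accept free text, suggest close matches from the (necessarily incomplete) canonical list, and never reject input that fails to match. The following is a minimal illustration using Python’s standard-library `difflib`; the institution names and function name are invented for this example.

```python
# Minimal sketch of a form field that assumes its list is INCOMPLETE:
# fuzzy-match free text against known names, but keep unmatched input.
import difflib

# An illustrative (deliberately incomplete) canonical list.
CANONICAL = [
    "University of Michigan",
    "Michigan State University",
    "University of Chicago",
]

def resolve_institution(user_text, canonical=CANONICAL):
    """Return (name, matched). Never reject the user's input."""
    matches = difflib.get_close_matches(user_text, canonical, n=1, cutoff=0.6)
    if matches:
        return matches[0], True     # map variant spellings to one entry
    return user_text, False         # keep raw text; flag for human review

print(resolve_institution("U of Michigan"))
print(resolve_institution("Hogwarts"))
```

The design choice the paragraph argues for lives in the second return: an unrecognized institution is stored as typed and flagged, rather than being treated as a user error, so the list can grow from real-world input instead of presuming it was perfect on day one.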

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by sifting just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A., Shaw, J.C., & Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, pp. 256–264.

Quinlan, J.R., & Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625–646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257–269.
