Old Enough to Know Less?
There are many themes in Chapter 18 of Turing’s Nightmares. Let us begin with a major theme that is actually meant as practical advice for building artificial intelligence. I believe that an AI system that interacts well with human beings will need to move around in physical space and social space. Whether or not such a system will end up actually experiencing human emotions is probably unknowable. I suspect it will only be able to understand, simulate, and manipulate such emotions. I believe that the substance of which something is made typically has deep implications for what it is. In this case, the fact that we human beings are based on a billion years of evolution and are made of living cells has implications about how we experience the world. However, here we are addressing a much less philosophical and more practical issue. Moving around and interacting facilitates learning.
I first discussed this in an appendix to my dissertation, in which I compared human behavior on a problem-solving task to the behavior of an early and influential AI system modestly titled “The General Problem Solver.” In studying problem solving, I came across two interesting findings that seemed somewhat contradictory. On the one hand, Grand Master chess players had outstanding memory for “real” chess positions (i.e., ones taken from actual high-level games). On the other hand, think-aloud studies of Grand Masters showed that they re-examined positions they had already visited earlier in their thinking. My hypothesis was that when Grand Masters examined one part of the game tree and then another, they updated a slightly altered copy of their general evaluation function with what that exploration taught them, so that the evaluation function they applied to this particular position became tuned to this particular position.
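The hypothesis might be sketched as follows. This is a minimal, purely illustrative model, not how any chess program actually works: the features, weights, and numbers are invented. A general evaluation function is copied, and the copy is nudged toward what exploring one branch of the tree suggested, leaving the general function untouched.

```python
# Illustrative sketch of a "locally tuned copy" of an evaluation function.
# All features, weights, and scores here are invented for the example.

def evaluate(position, weights):
    """Score a position as a weighted sum of simple features."""
    return sum(weights[name] * value for name, value in position.items())

def tune_for_position(position, weights, observed_score, rate=0.1):
    """Return a *copy* of the general weights, nudged toward the score
    that exploring this particular position suggested (a crude
    gradient step on the prediction error)."""
    tuned = dict(weights)  # copy, so the general function is preserved
    error = observed_score - evaluate(position, weights)
    for name, value in position.items():
        tuned[name] += rate * error * value
    return tuned

# A toy "position" described by feature values, and general weights.
general = {"material": 1.0, "mobility": 0.5}
position = {"material": 2.0, "mobility": 1.0}

# Exploring one branch suggests the position is worth more than the
# general function says; tune a local copy for this position only.
local = tune_for_position(position, general, observed_score=4.0)

assert general == {"material": 1.0, "mobility": 0.5}   # general intact
assert evaluate(position, local) > evaluate(position, general)
```

The point of the sketch is the copy: the player's general knowledge survives unchanged, while a position-specific variant absorbs what the search just revealed.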
Our movements through space, in particular, provide us with a huge number of examples from which to learn about vision, sound, touch, kinesthetics, smell, and their relationships. What we see when we walk, for instance, is not a random sequence of images (unlike TV commercials!), but one with very particular and useful properties. As we approach objects, we typically get more and more detailed images of them. This allows a constant tuning process for recognizing things at a distance and with minimal cues.
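One way to read that tuning process is as self-supervision: the confident identification earned up close becomes a training label for the earlier, distant view of the same object. The following toy recognizer is a sketch under that reading only; the features, weights, and views are all invented.

```python
# Illustrative sketch: the label confirmed by a detailed close-up view
# supervises recognition of the distant view of the same object.
# Every number here is invented for the example.

def score(view, weights):
    """Linear recognizer: higher score = more 'cup-like'."""
    return sum(w * v for w, v in zip(weights, view))

def tune(weights, view, target, rate=1.0):
    """Nudge weights so that `view` scores closer to `target`."""
    error = target - score(view, weights)
    return [w + rate * error * v for w, v in zip(weights, view)]

weights = [0.1, 0.1, 0.1]    # initially near-useless recognizer
far_view = [0.6, 0.5, 0.2]   # distant view: weak, partial cues

# Up close, the object was confidently identified as a cup (target = 1.0).
# Walking toward it supplied the supervision: replay the distant view
# and tune toward the label confirmed at close range.
for _ in range(20):
    weights = tune(weights, far_view, target=1.0)

# The same sparse, distant view now scores close to the "cup" target.
assert abs(score(far_view, weights) - 1.0) < 0.05
```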
An analogous case could be made for getting to know people. We make inferences and assumptions about people initially, based on very little information. Over time, if we get to know them better, we have the opportunity to find out more about them. This potentially allows us (or a really smart robot) to learn to “read” people better over time. But it does not always work out that way. Because of the ambiguities of interpreting human actions and motives, as well as the longer time delays, learning more about people is not guaranteed the way it is with visual stimuli. If a person begins interacting with someone predefined to be in a “bad” category, experience with that person may be viewed through so heavy a filter that no evidence changes their mind, however overwhelming it might seem to an outside observer. If a man believes all people who wear hats are “stupid” and “prone to violence,” he may dismiss a smart, peaceful person who wears a hat as “the exception that proves the rule,” or say, “Well, he doesn’t always wear hats,” or “The hats he wears are made by non-hat wearers, and that makes him seem peaceful and intelligent.” Such misperceptions, over-generalizations, and prejudices persist partly because they also form a framework for rationalizing greed and unfairness. It’s “okay” to steal from people who wear hats because, after all, they are basically stupid and prone to violence.
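The “heavy filter” effect can be made concrete with a toy Bayesian update (all of the probabilities below are invented for illustration): when the prior assigned to “this hat-wearer is intelligent and peaceful” starts near zero, even several pieces of strong favorable evidence barely move the posterior, while a balanced prior is quickly won over by the same evidence.

```python
# Toy Bayesian illustration of prejudice as an extreme prior.
# All probabilities are invented for the example.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayes step: P(H | E) from P(H), P(E | H), and P(E | not H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Evidence that is 5x more likely if the person really is smart & peaceful.
evidence = (0.5, 0.1)

open_mind = 0.5     # a balanced starting belief
prejudiced = 0.001  # a "heavy filter" starting belief

for _ in range(3):  # three independent pieces of favorable evidence
    open_mind = update(open_mind, *evidence)
    prejudiced = update(prejudiced, *evidence)

print(round(open_mind, 3))   # 0.992 -- nearly convinced
print(round(prejudiced, 3))  # 0.111 -- still far below even odds
```

Even this idealized updater, which weighs evidence honestly, is nearly immovable from an extreme starting point; the rationalizations in the paragraph above (“the exception that proves the rule”) amount to discounting the evidence itself, which is worse still.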
Unfortunately, when it comes to the potential for humans to learn about each other, there are a few people who actually prey on and amplify the unenlightened aspects of human nature because they themselves gain power, wealth, and popularity by doing so. They say, in effect, “All the problems you are experiencing — they are not your fault! They are because of the people with hats!” It’s a ridiculous presumption, but it often works. Would intelligent robots be prone to the same kinds of manipulation? Perhaps. It probably depends, not on a wheelbarrow filled with rainwater, but on how they are initially programmed. I suspect that an “intelligent agent” or “personal assistant” would be better off if it could take a balanced view of its experience rather than one directed top-down by pre-programmed prejudice. In this regard, creators of AI systems (as well as everyone else) would do well to employ the “Iroquois Rule of Six.” This rule, taken from the work of Paula Underwood, holds that when you observe a person’s actions, it is normal to immediately form a hypothesis about why they are doing what they do. Before you act, however, you should typically generate five additional hypotheses about why they do as they do, and try to gather evidence about each of them.
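As a design rule for a hypothetical agent, the Rule of Six could be as simple as a guard that refuses to act on an observation until six distinct explanations have been put on the table. The sketch below is one invented way to encode that; the agent, function names, and example hypotheses are all illustrative.

```python
# A minimal sketch of the "Rule of Six" as a guard in a hypothetical
# agent: no acting on an observation until at least six distinct
# hypotheses about it have been generated. Everything is illustrative.

RULE_OF_SIX = 6

def explain(observation, hypotheses):
    """Return the working hypotheses, but only once enough distinct
    explanations have been generated to satisfy the rule."""
    distinct = set(hypotheses)
    if len(distinct) < RULE_OF_SIX:
        raise ValueError(
            f"Only {len(distinct)} hypotheses for {observation!r}; "
            f"generate {RULE_OF_SIX} before acting."
        )
    return sorted(distinct)

hypotheses = [
    "they are hostile",                  # the immediate, reflexive reading
    "they are tired",
    "they misheard me",
    "they are late for something",
    "they follow a custom I don't know",
    "they are rehearsing something else",
]
working_set = explain("person walked away mid-sentence", hypotheses)
assert len(working_set) == 6
```

The design choice worth noting is that the reflexive first reading is kept, not discarded; it simply loses its monopoly, which is the point of the rule.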
If prejudice and bigotry are allowed to flourish as an “acceptable political position,” the result can be the erosion of peace, prosperity, and democracy. This is especially dangerous in a country as diverse as the USA. Once negative emotions about others are accepted as fine and dandy, prejudice and bigotry can become institutionalized. For example, in the Jim Crow South, not only were many if not most individual “Whites” themselves prejudiced; it became illegal even for unprejudiced whites to sit at the same counters, use the same restrooms, and so on. People could literally be thrown in jail simply for being rational. In Nazi Germany, not only were Jews subject to genocide; German non-Jewish citizens could be prosecuted for aiding them; in other words, for doing something human and humane. Once such a system became law, with an insane dictator at the helm, millions of lives were lost in “fixing” it. Of course, even the Allied victory in World War II did not bring back the six million Jews who were killed. The Germans were very close to developing the atomic bomb before the USA did. Had they developed such a bomb in time, with an egomaniacal dictator at the helm, would they have used it to impose their hatred of Jews, Gypsies, homosexuals, and the differently abled on everyone? Of course they would have. And then, what would have happened once all the “misfits” were eliminated? You guessed it. Another group would have been targeted. Because getting rid of all the misfits would not bring the promised peace and prosperity. It never has. It never will. By its very nature, it never could.
Artificial Intelligence is already a useful tool. It could continue to evolve in even more useful and powerful directions. But how does that potential for a powerful amplifier of human desire play out if it falls into the hands of a nation with atomic weapons? How does it play out if that nation is headed by an egomaniac who plays on the very worst of human nature in order to consolidate power and wealth? Will robots be programmed to be “open-minded” and learn for themselves who should be corrected, punished, imprisoned, eliminated? Or will they become tools to eliminate ever-larger groups of the “other” until no one is left but the man on the hill, the man in the high castle? Is this the way we want the trajectory of primate evolution to end? Or do we find within ourselves, each of us, that more enlightened seed to plant? Could AI instead help us finally overcome prejudice and bigotry by letting us understand more fully the beauty of the spectrum of what it means to be human?
More about Turing’s Nightmares can be found on the author’s page on Amazon.