[Image: The non-sound of non-music.]

What follows is the first in a series of blog posts that discuss, in turn, the scenarios in “Turing’s Nightmares” (https://www.amazon.com/author/truthtable).

One of the deep dilemmas in the human condition is this. In order to function in a complex society, people become “expert” in particular areas. Ideally, the areas we choose are consistent with our passions and with our innate talents. This results in a wonderful world! We have people who are expert in cooking, music, art, farming, and designing clothes. Some choose journalism, mathematics, medicine, sports, or finance as their fields. Expertise often becomes yet more precise. People are not just “scientists” but computer scientists, biologists, or chemists. The computer scientists may specialize still further into chip design, software tools, or artificial intelligence. All of this specialization not only makes the world more interesting; it makes it possible to support billions of people on the planet. But here is the rub. As we become more and more specialized, it becomes more difficult for us to communicate with and appreciate each other. We tend to accept the concerns and values of our field and sub-sub-specialty as the “best” or “most important” ones.

To me, this is evident in the largely unstated and unchallenged assumption that a super-intelligent machine would necessarily be interested in building a “still more intelligent machine.” Such a machine might be so inclined. But it might also be inclined to choose some other human pursuit, or, still more likely, to pursue something that is of no interest whatever to any human being.

Of course, one could theoretically ensure that a “super-intelligent” system is pre-programmed with an immutable value system that guarantees it will pursue, as its top priority, building a still more intelligent system. However, to do so would inherently limit the ability of the machine to be “super-intelligent.” We would be assuming that we already know what is most valuable and hamstringing the system so that it cannot discover anything more valuable or more important. To me, this makes as much sense as an all-powerful God allowing a species of whale to evolve, but predefining that its most urgent desire is to fly.

An interesting example of values can be seen in the Figure Analogies dissertation of T.G. Evans (1968). Evans, a student of Marvin Minsky, developed a program to solve multiple-choice figure analogies of the form A:B::C:D1, D2, D3, D4, or D5. The program essentially tried to “discover” transformations and relationships between A and B that could also account for relationships between C and the various D possibilities. And, indeed, it could find such relationships. In fact, every answer is “correct.” That is to say, the program was so powerful that it could “rationalize” any of the answers as being correct. According to Evans’s account, fully half of the work of the dissertation was discovering and then inculcating his program with the implicit values of the test makers so that it chose the same “correct” answers as the people who published the test. (This is discussed in more detail in the pattern “Education and Values” that I contributed to Liberating Voices: A Pattern Language for Communication Revolution, Douglas Schuler, MIT Press, 2008.)

For example, suppose that A is a capital “T” figure and B is an upside-down “T” figure. C is an “F” figure. Among the possible answers are “F” figures in various orientations. To go from a “T” to an upside-down “T,” you can rotate the “T” 180 degrees in the plane of the paper. But you can also get there by “flipping” the “T” outward from the plane. Or, you could “translate” the top bar of the “T” from the top to the bottom of the vertical bar. It turns out that the people who published the test preferred that you rotate the “T” in the plane of the paper. But why is this “correct”? In “real life,” of course, there is generally much more context to help you determine what is most feasible. Often, there will be costs or side effects of various transformations that help determine which is the “best” answer. But in standardized tests, all that context is stripped away.
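
To make the underlying idea concrete, here is a minimal sketch (in Python, and emphatically not Evans’s actual program): figures are treated as sets of (x, y) points, the solver looks for a candidate transformation that maps A onto B, applies the same transformation to C, and sees which answer choices result. The particular list of transformations and the point-set representation are illustrative assumptions.

```python
# A minimal sketch of analogy "rationalization", not Evans's actual program.
# Figures are frozensets of (x, y) points; the transformation list is assumed.

CANDIDATE_TRANSFORMS = {
    "rotate_180_in_plane": lambda pts: frozenset((-x, -y) for x, y in pts),
    "flip_out_of_plane":   lambda pts: frozenset((x, -y) for x, y in pts),  # reflect about horizontal axis
    "mirror_left_right":   lambda pts: frozenset((-x, y) for x, y in pts),  # reflect about vertical axis
    "identity":            lambda pts: frozenset(pts),
}

def solve_analogy(a, b, c, choices):
    """Return every (transform, choice_index) pair that 'explains' A:B::C:?.

    More than one transformation may account for A -> B, and each may endorse
    a different answer; deciding which explanation is "correct" is exactly
    the value judgment discussed above.
    """
    explanations = []
    for name, transform in CANDIDATE_TRANSFORMS.items():
        if transform(a) == b:              # this transform accounts for A -> B
            predicted_d = transform(c)     # apply the same transform to C
            for i, d in enumerate(choices):
                if predicted_d == d:
                    explanations.append((name, i))
    return explanations
```

With a symmetric figure like the “T,” several transformations explain A:B equally well, so a sketch like this can return multiple, mutually inconsistent “correct” answers for the “F” — which is precisely why Evans had to build the test makers’ preferences into the program.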

Here is another example of values. If you ever take the Wechsler “Intelligence” test, one series of questions will ask you how two things are alike. For instance, they might ask, “How are an apple and a peach alike?” You are “supposed to” answer that they are both fruit. True enough. This gives you two points. If you give a functional answer such as “You can eat them both,” you get only one point. If you give an attributional answer such as “They are both round,” you get zero points. Why? Is this a wrong answer? Certainly not! The test makers are measuring the degree to which you have internalized a particular hierarchical classification system. Of course, there are many tasks and contexts in which this classification system is useful. But in some tasks and contexts, seeing that they are both round, or that they both grow on trees, or that they are both subject to pests is the most important thing to note.
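
As a toy illustration only (the point values echo the description above and are not taken from any actual Wechsler scoring manual), the same answer scores completely differently depending on which value system the scorer happens to have adopted:

```python
# Two made-up value systems for scoring "how are they alike?" answers.
WECHSLER_STYLE   = {"categorical": 2, "functional": 1, "attributional": 0}
SORTING_BY_SHAPE = {"attributional": 2, "functional": 1, "categorical": 0}

def score(answer_type: str, value_system: dict) -> int:
    """Score an answer type under a given (implicit) value system."""
    return value_system.get(answer_type, 0)

# "They are both round" is worthless under one value system and best under another.
print(score("attributional", WECHSLER_STYLE))    # 0
print(score("attributional", SORTING_BY_SHAPE))  # 2
```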

We might define intelligence as the ability to solve problems. A problem can be seen as wanting to be in a state that you are not currently in. But what if you have no desire to be in the “desired” state? Then, for you, it is not a problem. Suppose a child is given a homework assignment asking them to find the square root of 2 to four decimal places. If the child truly does not care, it becomes a problem, not for the child, but for the parent: “How can I make my child do this?” The parent may threaten or cajole or reward the child until the child wants to write out the answer. So, the child may say, “Okay. I can do this. Leave me alone.” Then, after the parent leaves, the child texts a friend on the phone and copies the answer onto the paper. The child has now solved their problem.
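
Here is a minimal sketch, with made-up names, of the framing in the previous paragraph: a “problem” exists only relative to someone’s desired state, so the same situation can be a problem for the parent and no problem at all for the child.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Problem:
    current_state: str
    desired_state: Optional[str]  # None models having no desire to be elsewhere

    def exists(self) -> bool:
        """A problem exists only if a desired state is held and not yet reached."""
        return self.desired_state is not None and self.desired_state != self.current_state

# The child holds no goal, so for the child there is no problem:
homework_for_child = Problem("square root of 2 unknown", None)
# The parent holds a goal, so for the parent there is one:
homework_for_parent = Problem("child has not done the homework",
                              "child has done the homework")

print(homework_for_child.exists())   # False
print(homework_for_parent.exists())  # True
```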

Would a super-intelligent machine necessarily want to build a still more intelligent machine? Maybe it would want to paint, make music, or add numbers all day. And, if it did decide to make music, would that music be designed for us or for its own enjoyment?

Indeed, a large part of the values equation is “for whose benefit?” Typically, in our society, when someone pays for a system, they get to determine for whose benefit the system is designed. But even that is complex. You might say that cigarettes are “designed” for the “benefit” of the smoker. But in reality, while they satisfy a short-term desire of the smoker, they are designed for the benefit of the tobacco company executives, who set up a system in which smokers themselves paid for research into how to make cigarettes even more addictive and for advertising to make them appeal to young children. There are many such systems. As AI systems continue to become more ubiquitous and complex, the values inherent in those systems, and who is to benefit from them, will become more and more difficult to trace.

Values are inextricably bound up with what constitutes a “problem” and what constitutes a “solution.” This is no trivial matter. Hitler considered the annihilation of the Jews the “final solution.” Some people in today’s society think that the “solution” to the “drug problem” is a “war on drugs,” which has certainly destroyed orders of magnitude more lives than drugs themselves have. (Major sponsors of the “Partnership for a Drug-Free America” have been drug companies.) Some people consider the “solution” to the problem of crime to be stricter enforcement, harsher penalties, and building more prisons. Other people think that a more equitable society with more opportunities for jobs and education will do far more to mitigate crime. Which is the more “intelligent” solution? Values will be a critical part of any AI system. Generally, the inculcation of values is an implicit process. But if AI systems begin making what are essentially autonomous decisions that affect all of us, we need to have a very open and very explicit discussion of the values inherent in such systems now.

