Pros and Cons of Artificial Intelligence


The Pros and Cons of AI Part Three: Artificial Intelligence

We have already shown in the two previous blogs why it is more effective and efficient to replace eating with Artificial Ingestion and to replace sex with Artificial Insemination. In this, the third and final part, we will discuss why human intelligence should be replaced with Artificial Intelligence. The arguments, as we shall see, are mainly simple extrapolations from replacing eating and sex with their more effective and efficient counterparts.

Human “intelligence” is unpredictable. In fact, all forms of human behavior are unpredictable in detail. It is true that we can often predict statistically what people will do in general. But even those predictions often fail. It is hard to predict whether and when the stock market will go up or down or which movies will be blockbuster hits. By contrast, computers, as we all know, never fail. They are completely reliable and never make mistakes. The only exceptions to this general rule are those rare cases where hardware fails, software fails, or the computer system was not actually designed to solve the problems that people actually had. Putting aside these extremely rare cases, other errors are caused by people. People may cause errors because they failed to read the manual (which doesn’t actually exist because, to save costs, vendors now expect users to look up the answers to their problems on the web) or because they were confused by the interface. In addition, some “errors” occur because hackers intentionally make computer systems operate in ways they were not intended to operate. Again, this means human error was the culprit. In fact, one can argue that hardware errors and software errors were also caused by errors in production or design. If these errors see the light of day, then there were also testing errors. And if the project ends up solving problems that are different from the real problems, then that too is a human mistake in leadership and management. Thus, as we can see, replacing unpredictable human intelligence with predictable artificial intelligence is the way to go.

Human intelligence is slow. Let’s face it. To take a representative activity of intelligence, it takes people seconds to minutes to compute the square root of a 16-digit number, while computers can do this much more quickly. It takes even a good artist at least seconds, and probably minutes, to draw a good representation of a birch tree. But Google can pull up an excellent image in less than a second. Some of these will not actually be pictures of birch trees, but many of them will.
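To be fair to the computers, the arithmetic half of this claim is easy to check. The sketch below is my own illustration (the numbers are arbitrary examples, not anything from the post): integer square roots of 16-digit numbers, which take a human seconds to minutes, take a computer on the order of microseconds.

```python
# Timing integer square roots of 16-digit numbers.
# The example numbers here are arbitrary illustrations.
import math
import time

numbers = [1234567890123456, 9876543210987654, 1111111111111111]

start = time.perf_counter()
roots = [math.isqrt(n) for n in numbers]  # floor of the exact square root
elapsed = time.perf_counter() - start

for n, r in zip(numbers, roots):
    print(f"isqrt({n}) = {r}")
print(f"computed in {elapsed * 1e6:.1f} microseconds")
```

Each result `r` satisfies `r*r <= n < (r+1)*(r+1)`, i.e., it is the exact floor of the square root, not a floating-point approximation — so the machine is not only faster but more precise.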

Human intelligence is biased. Because of their background, training, and experience, people end up with various biases that influence their thinking. This never happens with computers unless they have been programmed to do something useful, in which case some values will have to be either programmed in or learned through background, training, and experience.

Human intelligence in its application most generally has a conscious and experiential component. When a human being is using their intelligence, they are aware of themselves, the situation, the problem, and the process, at least to some extent. So, for example, the human chess player is not simply playing chess; they are quite possibly enjoying it as well. Similarly, human writers enjoy writing; human actors enjoy acting; human directors enjoy directing; human moviegoers enjoy the experience of thinking about what is going on in the movie and feeling, to a large degree, what the people on the screen are attempting to portray. This entire process is largely inefficient and ineffective. If humans insist on feeling things, that could all be accomplished much more quickly with electrodes.

Perhaps worst of all, human intelligence is often flawed by trying to be helpful. This is becoming less and less true, particularly in large cities and large bureaucracies. But here and there, even in these situations that should be models of blind rule-following, you occasionally find people who are genuinely helpful. The situation is even worse in small towns and farming communities where people are routinely helpful, at least to the locals. It is only when a user finds themselves interacting with a personal assistant or audio menu system with no possibility of a pass-through to a human being that they can rest assured that they will not be distracted by someone actually trying to understand and help solve their problem.

Of course, people in many professions, whether they are drivers, engineers, scientists, advertising teams, lawyers, farmers, police officers, etc., will claim that they “enjoy” their jobs or at least certain aspects of them. But what difference does that make? If a robot or AI system can do 85 to 90% of the job in a fast, cheap way, why pay for a human being to do the service? Now, some would argue that a few people will be left to do the 10–15% of cases not foreseen ahead of time in enough detail to program (or not seen in the training data). But why? What is typically done, even now, is to just let the user suffer when those cases come up. It’s too cumbersome to bother with back-up systems to deal with the other cases. So long as the metrics for success are properly designed, these issues will never see the light of day. The trick is to make absolutely sure that the user has no alternative means of recourse to bring up the fact that their transaction failed. Generally, as the recent case with Yahoo shows, even if the CEO becomes aware of a huge issue, there is no need to bring it to public attention.

All things considered, it seems that “Artificial Intelligence” has a huge advantage over “Natural Intelligence.” AI can simply be defined to be 100% successful. It can save money, and that money can be appropriately partitioned to top company management, shareholders, workers, and consumers. A good general formula to use in such cases is the 90-10 rule; that is, 90% of the increased profits should go to top management and 10% should go to the shareholders.

As against increased profits, one could argue that people get enjoyment out of the thinking that they do. There is some truth to that, but so what? If people enjoy playing doctor, lawyer, and truck driver, they can still do that, but at their own expense. Why should people pay for them to do that when an AI system can do 85% of the job at nearly zero cost? Instead of worrying about that, we should turn our attention to a more profound problem: what will top management do with that extra income?

Author Page on Amazon

Turing’s Nightmares


Pros and Cons of Artificial Insemination


The Pros and Cons of AI: Part Two (Artificial Insemination).

Animal husbandry and humane human medical practice offer up many situations where artificial insemination is a useful and efficient technique. It is often used in horse breeding, for example, to avoid the risk of injury that more natural breeding might engender. There are similarly many cases where a couple wants to get pregnant and the “ordinary” way will not work. This could be due to physical problems with the man, the woman, or both. In some cases, it will even be necessary to use sperm from someone who is not going to be the legal father. Generally, the couple will decide it is more acceptable emotionally if the sperm donor is anonymous and the insemination is not done via intercourse.

But what about all those cases where the couple tries, and indeed succeeds, the “old-fashioned way”? An argument could certainly be made that all intercourse should be replaced with AI (artificial insemination).

First, the old-fashioned way often produces emotional bonding between the partners. (Some even call it “making love.”) No one has ever provided a convincing quantitative economic analysis of why this is beneficial. It is certainly painful when pair-bonded individuals are split apart by divorce or death. AI would not prevent all pair bonding, but it could help reduce the risk of such bonds being formed.

Second, the old-fashioned way risks the transmission of sexually transmitted diseases. Even when pairs are not trying to get pregnant and even when they have the intention of using forms of “protection”, sometimes passion overtakes reason and people, in the heat of the moment, “forget” to use protection. AI provides an opportunity for screening and for greatly reducing the risk of STDs being spread.

Third, the combinations of genes produced by sexual intercourse are random and uncontrolled. While it is currently beyond the state of the art, one can easily imagine that sometime in this century it will be possible to “screen” sperm cells and only choose the “best” for AI.

Fourth, traditional sex is often quite expensive in terms of economic costs. Couples will often spend hours engaging in procreational activities that need only take minutes. Beyond that, traditional sex is often accompanied by special dinners, walks on the beach, and romantic music, and couples often continue to stay together in essentially unproductive activities even after sex, such as cuddling and talking.

There are probably additional reasons why AI makes a lot of sense economically and why it is a lot better than the old-fashioned alternative.

Of course, one could take the tack of considering life as something valuable for the experiences themselves and not merely as a means to an end of higher productivity. This seems a dangerously counter-cultural stand to take in modern American society, but in the interest of completeness, and mainly just to prove its absurdity, let us consider for a moment that sex may have some intrinsic and experiential value to the participants.

Suppose that lovers take pleasure in the sights, sounds, smells, feels, and tastes associated with their partners. Imagine that the sexual acts they engage in provide pleasure in and of themselves. There seems to be a great deal of uncertainty about the monetary value of these experiences since the prices charged for artificial versions of these experiences can easily vary by a factor of ten or more. In fact, there have been reports that some people will only engage in sex that is not paid for directly.

So, on the one hand, we have the provable efficiency and effectiveness of AI. On the other hand, we have human experiences whose value is problematic to quantify. The choice seems obvious. Sometime in this century, no doubt, all insemination will be done artificially so that everyone (or at least some very rich people) can enjoy the great economic benefits that will come about from the increased efficiency and effectiveness of AI as compared with “natural” sex.

As further proof, if it is needed, imagine two island countries alike in every way in terms of climate, natural beauty, current economic opportunity, literacy and so on. In fact, the only way these two islands differ is that on one island (which we shall call AII for Artificial Insemination Isle) all “sex” is limited to AI whilst on the other island (which we shall call NII for Natural Insemination Isle) sex is natural and people can spend as much or as little time as they like doing it. Now, people are given a choice about which island to live on. Certainly, with its greater prospects of economic growth and efficiency, everyone would choose to live on AII while NII would be virtually empty. Readers will recognize that this is essentially the same argument as to why “Artificial Ingestion” should surely replace “Natural Ingestion” — cheaper, faster, more reliable. If readers see any holes in this argument, I’d surely like to be informed of them.


The Pros and Cons of AI: Part One


This is the first of three connected blog posts on the appropriate uses and misuses of AI. In this blog post, I’ll look at “Artificial Ingestion.” (Trust me, it will tie back to another AI, Artificial Intelligence).

While ingestion, and therefore “Artificial Ingestion,” is a complex topic, I begin with ingestion because it is a bit more divorced from thought itself. It is easier to think of ingestion as separate from thinking; that is, to objectify it more than artificial intelligence, because in writing about intelligence, it is necessary to use intelligence itself.

Do we eat to live or live to eat? There is little doubt that eating is necessary to the life of animals such as human beings. Our distant ancestors could have taken a greener and more photosynthetic path but instead, we have collectively decided to kill other organisms to garner our energy. Eating has a utilitarian purpose; indeed, it is a vital purpose. Without food, we eventually die. Moreover, the quality and quantity of the food we eat has a profound impact on our health and well-being. Many of us live in a paradoxical time when it comes to food. Our ancestors often struggled mightily to obtain enough food. Our brains are thus genetically “wired” to search for high sugar, high fat, high salt foods. Even though many of us “know” that we ingest too many calories and may have read and believe that too much salt and sugar are bad for us, it is difficult to overcome the “programming” of countless generations. We are also attracted to brightly colored food. In our past, these colors often signaled foods that were especially high in healthful phytochemicals.

Of course, in modern societies of the “Global North,” our genetic predispositions toward high sugar, high fat, high salt, highly colored foods are manipulated by greedy corporate interests. Foods like crackers and chips that contain almost nothing of real value to the human diet are packaged to look like real foods. Beyond that, billions of advertising dollars are spent to convince us that if we buy and ingest these foods it will help us achieve other goals. For example, we are led to believe that a mother who gives her children “food” consisting of little other than sugar and food dye will be loved by her children and that they will be excited and happy children. Children themselves are led to believe that ingesting such junk food will lead them to magical kingdoms. Adult males are led to believe that providing the right kinds of high fat, high salt chips will result in male bonding experiences. Adult males are also led to believe that the proper kinds of alcoholic beverages will result in the seduction of highly desirable looking mates.

Over time, the natural act of eating has been enhanced with rituals. Human societies came to hunt and gather (and later farm) cooperatively. In this way, much more food could be provided on a more continuous basis. Rather than fight each other over food, we sit down in a “civilized” manner and enjoy food together. Some people, through a combination of natural talent and training, become experts in the preparation of foods. We have developed instruments such as chopsticks, spoons, knives, and forks to help us eat foods. Most typically, various cultures have rituals and customs surrounding food. In many cases, these seem to be geared toward removing us psychologically from the life-giving functionality of food toward the communal enjoyment of food. For example, in my culture, we wait to eat until everyone is served. We eat at a “reasonable” pace rather than gobbling everything down as quickly as possible (before others at the table can snatch our portion). If there are ten people at the table and eleven delicious desserts, people turn many social somersaults in order to avoid taking the last one.

For much of our history, food was confined to what was available in the local region and season. Now, many people, but by no means all, are well off enough to buy foods in any season that were originally grown all over the world. When I was a child, very few Americans had even tried sushi, for example, and the very idea of eating raw fish turned stomachs. At this point, however, many Americans have tried it, and most of those who have tried it enjoy it. Similarly, other cuisines such as Indian and Middle Eastern have spread throughout the world in ways that would have been impossible without modern transportation, refrigeration, and modern training with cookbooks, translations, and videos supplementing face-to-face apprenticeships.

Some of these trends have enabled some people to enjoy foods of high quality and variety. We support many more people on the planet than would have been possible through hunting and gathering. These “advances” are not without costs. First, there are more people starving in today’s world than even existed on the planet 250,000 years ago. So, these benefits are very unevenly distributed. Second, while fine and delicious foods are available to many, the typical diet of many is primarily based on highly processed grains, soybeans, fat, refined sugar, salt and additives. These “foods” contain calories that allow life to continue; however, they lack many naturally occurring substances that help provide for optimal health. As mentioned, these foods are made “palatable” in the cheapest possible way and then advertised to death to help fool people into thinking they are eating well. In many cases, even “fresh” foods are genetically modified through breeding or via genetic engineering to provide foods that are optimized for cheap production and distribution rather than taste. Anyone who has grown their own tomatoes, for example, can readily appreciate that home grown “heirloom” tomatoes are far tastier than what is available in many supermarkets. While home farmers and small farmers have little in the way of government support, at least in the USA, mega-farming corporations are given huge subsidies to provide vast quantities of poor quality calories. As a consequence, low income people can generally not even afford good quality fresh fruits and vegetables and instead are forced through artificially cheap prices to feed their families with brightly packaged but essentially empty calories.

While some people enjoy some of the best food that ever existed, others have very mediocre food and still others have little food of any kind. What comes next? On the one hand, there is a move toward ever more efficient means of production and distribution of food. The food of humans has always been of interest to a large variety of other animals including rats, mice, deer, rabbits, birds, and insects. Insect pests are particularly difficult to deal with. In response, and in order to keep more of the food for “ourselves”, we have largely decided it is worth the tradeoff to poison our food supply. We use poisons that are designed to kill off insect pests but not kill us off, at least not immediately. I grow a little of my own food and some of that food gets eaten by insects, rabbits, and birds. Personally, I cannot see putting poison on my food supply in order to keep pests from having a share. However, I am lucky. I do not require 100% of my crop in order to stay alive nor to pay off the bank loan by selling it all. Because I grow a wide variety of foods in a relatively small space, there is a lively ecosystem and I don’t typically get everything destroyed by pests. Farmers who grow huge fields of corn, however, can be in a completely different situation and a lot of a crop can fall prey to pests. If they have used pesticides in the past, this is particularly true because they have probably poisoned the natural predators of those pests. At the same time, the pests themselves continue to evolve to be resistant to the poisons. In this way, chemical companies perpetuate a vicious circle in which more and more poison is needed to keep the crops viable. Luckily for the chemical companies, the long-term impact of these poisons on the humans who consume them is difficult to prove in courts of law.

There are movements such as “slow food” and eating locally grown food and urban gardens which are counter-trends, but by and large, our society of specialization has moved to more “efficient” production and distribution of food. More people eat out a higher percentage of the time and much of that “eating out” is at “fast food” restaurants. People grab a sandwich or a bagel or a burger and fries for a “quick fix” for their hunger in order to “save time” for “more productive” pursuits. Some of these “more productive” pursuits include being a doctor to cure diseases that come about in part from people eating junky food and spending most of their waking hours commuting, working at a desk or watching TV. Other “more productive” pursuits include being a lawyer and suing doctors and chemical companies for diseases. Yet other “more productive pursuits” include making money by pushing around little pieces of other people’s money. Still other “more productive pursuits” include making and distributing drugs to help people cope with lives where they spend all their time in “more productive pursuits.”

Do we live to eat or eat to live? Well, it is a little of both. But we seem to have painted ourselves into a corner where most people most of the time have forgone the pleasure of eating that is possible in order to eat more “efficiently” so that we can spend more time making more money. We do this in order to…? What is the end game here?

One can imagine a society in which eating itself becomes a completely irrelevant activity for the vast majority of people. Food that requires chewing takes more time so let’s replace chewing with artificial chewing. Using a blender allows food with texture to be quickly turned to a liquid that can be ingested in the minimum necessary time. One extreme science fiction scenario was depicted in the movie “Soylent Green” which, as it turns out, is made from the bodies of people killed to make room for more people. The movie is set in 2022 (not that far away) and was released in 1973. Today, in 2016, there exists a food called “soylent” (https://en.wikipedia.org/wiki/Soylent_(food)) whose inventor, Rob Rhinehart took the name from the movie. It is not made from human remains but the purpose is to provide an “efficient” solution to the Omnivore’s Dilemma (Michael Pollan). More efficient than smoothies, shakes, and soylent are feeding tubes.

Of course, there are medical conditions where feeding tubes are necessary as a replacement or supplement to ordinary eating as is being “fed” via an IV. But is this really where humanity in general needs to be headed? Is eating to be replaced with “Artificial Ingestion” because it is more efficient? We wouldn’t have to “waste our time” and “waste our energy” shopping, choosing, preparing, chewing, etc. if we could simply have all our nutritional needs met via an IV or feeding tube. With enough people opting in to this option, I am sure industrial research could provide ever less invasive and more mobile forms of IV and tube feeding. At last, humanity could be freed from the onerous task of ingestion, all of which could be replaced by “Artificial Ingestion.” The dollars saved could be put toward some more worthy purpose; for example, making a very few people very very rich.

There are, of course, a few problematic issues. For one thing, despite years of research, we are still discovering nutrients and their impacts. Any attempt to completely replace food with a uniform liquid supplement would almost certainly leave out some vital, but as yet undiscovered ingredients. But a more fundamental question is to what end would we undertake this endeavor in the first place? What if the purpose of life is not, after all, to accomplish everything “more efficiently” but rather, what if the purpose of life is to live it and enjoy it? What then?


Rules and Standards nearly Dead? 


Ever get a speeding ticket that you thought was “silly”? I certainly have. On one occasion, when I was in graduate school in Ann Arbor, I drove by a police car parked in a gas station. It was a 35 mph zone. I looked over at the police car and looked down to check my speed. Thirty-five mph. No problem. Or, so I thought. I drove on and noticed that a few seconds later, the police officer turned his car on to the same road and began following me perhaps 1/4 to 1/2 mile behind me. He quickly zoomed up and turned on his flashing light to pull me over. He claimed he had followed me and I was going 50 mph. I was going 35. I kept checking because I saw the police car in my mirror. Now, it is quite possible that the police car was traveling 50, because he caught up with me very quickly. I explained this to no avail.

The University of Michigan at that time in the late 60’s was pretty liberal but was situated in a fairly conservative, some might say “redneck”, area of Michigan. There were many clashes between students and police. I am pretty certain that the only reason I got a ticket was that I was young and sporting a beard and therefore “must be” a liberal anti-war protester. I got the ticket because of bias.

Many years later, in 1988, I was driving north from New York to Boston on Interstate 84. This particular section of road is three lanes on both sides. It was a nice clear day and the pavement was dry as well as being dead straight with no hills. The shoulders and margins near the shoulders were clear. The speed limit was 55 mph but I was going 70. Given the state of my car, the conditions and the extremely sparse traffic, as well as my own mental and physical state, I felt perfectly safe driving 70. I got a ticket. In this case, I really was breaking the law. Technically. But I still felt it was a bit unjustified. There was no way that even a deer or rabbit, let alone a runaway child could come out of hiding and get to the highway without my seeing them in time to slow down, stop, or avoid them. Years earlier I had been on a similar stretch of road in Eastern Montana and at that time there was no speed limit. Still, rules are rules. At least for now.

“The Death of Rules and Standards” by Anthony J. Casey and Anthony Niblett suggests that advances in artificial intelligence may someday soon replace rules and standards with “micro-directives” tuned to the specifics of time and circumstance which will provide the benefits of rules without the cost of either. “…we suggest…a larger trend toward context specific laws that can adapt to any situation.” This is an interesting thesis and exploring it helps shine some light on what AI likely can and cannot do as well as making us question why we humans have categories and rules at all. Perhaps AI systems could replace human bias and general laws that seem to impose unnecessary restrictions in particular circumstances.

The first quibble with their argument is that no computer, however powerful, could possibly cover all situations. Taken literally, this would require a complete and accurate theory of physics as well as human behavior as well as a knowledge of the position and state of every particle in the universe. Not even post-singularity AI will likely be able to accomplish this. I hedge with the word “likely” because it is theoretically possible that a sufficiently smart AI will uncover some “hidden pattern” that shows that our universe which seems so vast and random can in fact be predicted in detail by a small set of laws that do not depend on details. In this fantasy future, there is no “true” randomness or chaos or butterfly effect.

Fantasies aside, the first issue that must be dealt with for micro-directives to be reasonable would be to have a good set of “equivalence classes” and/or to partition away differences that do not make a difference. The position of the moons of Jupiter shouldn’t make any difference as to whether a speeding ticket should be given or whether a killing is justified. Spatial proximity alone allows us as humans to greatly diminish the number of factors that need to be considered in deciding whether or not a given action is required, permissible, poor, or illegal. If I had gone to court about the speeding ticket on I-84, I might have mentioned the conditions of the roadway and its surroundings immediately ahead. I would not have mentioned anything whatever about the weather or road conditions anywhere else on the planet as being relevant to the safety of the situation. (Notice, though, that it did seem reasonable to me, and possibly to you, to mention that very similar conditions many years earlier in Montana gave rise to no speed limit at all.) This gives us a hint that what is relevant or not relevant to a given situation is non-trivially determined. In fact, the “energy crisis” of the early 70’s gave rise to the National Maximum Speed Law as part of the 1974 Federal Emergency Highway Energy Conservation Act. This enacted, among other things, a federal law limiting the speed limit to 55 mph. A New York Times article by Robert A. Hamilton cites a study of compliance on Connecticut Interstates in 1988 showing that 85% of the drivers violated the 55 mph speed limit!

So, not only would I not have received a ticket in Montana in 1972 for driving under similar conditions; I also would not have gotten a ticket on that same exact stretch of highway for going 70 in 1972 or in 1996. And, in the year I actually got that ticket, 85% of the drivers were also breaking the speed limit. The impetus for the 1974 law was that it was supposed to reduce demand for oil; however, advocates were quick to point out that it should also improve safety. Despite several studies on both of these factors, it is still unclear how much, if any, oil was actually saved, and it is also unclear what the impact on safety was. It seems logical that slower speeds should save lives. However, people may go out of their way to get to an Interstate if they can drive much faster on it. So some traffic during the 55 limit would stay on less safe rural roads. In addition, falling asleep while driving is not recommended. Driving a long trip at 70 gets you off the road earlier, and perhaps before dusk, while driving at 55 will keep you on the road longer and possibly in the dark. In addition, lowering the speed limit, to the extent there is any compliance, does not just impact driving; it could also impact productivity. Time spent on the road is (hopefully) not time working for most people. One reason it is difficult to measure empirically the impact of slower speeds on safety is that other things were happening as well. Cars have gained a number of features over time to make them safer, seat belt usage has gone up, and cars have also become more fuel efficient. Computers, even very “smart” computers, are not “magic.” They cannot completely differentiate cause and effect from naturally occurring data. For that, humans or computers have to do expensive, time-consuming, and ethically problematic field experiments.

Of course, what is true about something as simple as enforcing speed limits is equally or more problematic in other areas where one might be tempted to utilize micro-directives in place of laws. Sticking to speeding laws, micro-directives could “adjust” to conditions and avoid biases based on gender, race, and age, but they could also take into account many more factors. Should the allowable speed, for instance, be based on income? (After all a person making $250K per year is losing more money by driving more slowly than one making $25K/year). How about the reaction time of the driver? How about whether or not they are listening to the radio? As I drive, I don’t like using cruise control. I change my speed continually depending on the amount of traffic, whether or not someone in the nearby area appears to be driving erratically, how much visibility I have, how closely someone is following me and how close I have to be to the car in front and so on. Should all of these be taken into account in deciding whether or not to give a ticket? Is it “fair” for someone with extremely good vision and reaction times to be allowed to drive faster than someone with moderate vision and slow reaction times? How would people react to any such personalized micro-directives?

While the speeding-ticket situation is complex and could be fraught with emotion, what about other cases, such as abortion? Some people feel that abortion should never be legal under any circumstances, and others feel it is always the woman’s choice. Many people, however, feel that it is only justified under certain circumstances. But what are those circumstances, in detail? And even if the AI system takes into account 1000 variables to reach a “wise” decision, how would the rules and decisions be communicated?

Would an AI system be able to communicate in such a way as to personalize the presentation for a specific person in specific circumstances, warning them that they are about to break a micro-directive? In order to be “fair,” one could argue that the system should be equally able to prevent everyone from breaking a micro-directive. But some people are more unpredictable than others. What if, to make person A 98% likely to follow the micro-directive, the AI system must play a soundtrack of a screaming child, while to make person B 98% likely to follow it, a whispered warning suffices? Now person B ignores the micro-directive and speeds (which, according to the premise, would happen 2% of the time). Wouldn’t person B be likely to object that, given the same warning as person A, they would not have ignored the micro-directive? Conversely, person A might be so disconcerted by the warning that they end up in an accident.

Anyway, there is certainly no disputing that our current system of human judgment is prone to various kinds of conscious and unconscious biases. In addition, any system of general laws ends up punishing people for what is actually “reasonable” behavior under the circumstances and ends up letting people off scot-free when they do despicable things that are technically legal (absurdly rich people and corporations paying zero taxes come to mind). Will driverless cars be followed by judge-less and jury-less courts?

Turing’s Nightmares

Abracadabra!


Abracadabra! Here’s the thing. There is no magic. Of course, there is the magic of love and the wonder at the universe and so there is metaphorical magic. But there is no physical magic and no mathematical magic. Why do we care? Because in most science fiction scenarios, when super-intelligence happens, whether it is artificial or humanoid, magic happens. Not only can the super-intelligent person or computer think more deeply and broadly, they also can start predicting the future, making objects move with their thoughts alone and so on. Unfortunately, it is not just in science fiction that one finds such impossibilities but also in the pitches of companies about biotech and the future of artificial intelligence. Now, don’t get me wrong. Of course, there are many awesome things in store for humanity in the coming millennia, most of which we cannot even anticipate. But the chances of “free unlimited energy” and a computer that will anticipate and meet our every need are slim indeed.

This exaggeration is not terribly surprising. I am sure much of what I do seems quite magical to our cats. People in possession of advanced or different technology often seem “magical” to those with no familiarity with the technology. But please keep in mind that making a human brain “better,” whether by making it bigger, giving it more connections, or making it faster, will not enable the brain to move objects via psychokinesis. Yes, the brain does produce a minuscule amount of electricity, but far too little to move mountains or freight trains. Machines can, of course, be built to wield a lot of physical energy, but it isn’t the information-processing part of the system that directly causes something in the physical world. It is actuators of some type, just as it is with animals. Of course, super-intelligence could make the world more efficient. It is also possible that super-intelligence might discover as yet undiscovered forces of the universe. If it turns out that our understanding of reality is fundamentally flawed, then all bets are off. For example, if it turns out that there are twelve fundamental forces in the universe (or just one), and a super-intelligent system determines how to use them, there might be potential energy already stored in matter which can be released by the slightest “twist” in some other dimension or via some as yet undiscovered force. This might appear as “magic” to human beings who have never known about the other eight forces, let alone how to harness them.

There is another, more subtle kind of “magic” that might be called mathematical magic. As has been known for a long time, it is theoretically possible to play perfect chess by calculating all possible moves, all possible responses to those moves, and so on, down to the final draws and checkmates. It has been estimated that such a calculation of contingencies would not be possible even if the entire universe were a nano-computer operating in parallel since the beginning of time. There are many similar domains. Just because a person or computer is way, way smarter does not mean they will be able to calculate every possibility in a highly complex domain.
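A rough back-of-envelope calculation makes the point concrete. The branching factor and game length below are standard order-of-magnitude assumptions (roughly Shannon’s figures), not exact values:

```python
# How big is the full chess game tree compared to the universe?
# Assumed figures: ~35 legal moves per position, ~80 half-moves per game.
branching_factor = 35
plies = 80
game_tree_size = branching_factor ** plies

atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

# The exponent (number of digits minus one) gives the order of magnitude.
print(f"game tree ~ 10^{len(str(game_tree_size)) - 1}")
print(f"atoms     ~ 10^{len(str(atoms_in_universe)) - 1}")
```

Even with these conservative assumptions, the game tree dwarfs the count of atoms in the observable universe by more than forty orders of magnitude, which is why “just compute everything” is not an option for any intelligence, however super.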

Of course, it is also possible that some domains might appear impossibly complex but actually be governed by a few simple, though extremely difficult to discover, laws. For instance, it might turn out that one can calculate the precise value of a chess position (implicitly encapsulating all possible moves) through some as yet undiscovered algorithm, written perhaps in an as yet undesigned language. It seems doubtful that this would be true of every domain, but it is hard to say a priori.

There is another aspect of unpredictability, and that has to do with random and chaotic effects. Imagine trying to describe every single molecule of earth’s seas and atmosphere in terms of its motion and position. Even if there were some way to predict state N+1 from state N, we would have to know everything about state N. The effects of the slightest miscalculation or missing piece of data would be amplified over time. So long-term predictions of fundamentally chaotic systems, like the weather, or what your kids will be up to in 50 years, or where the stock market will be in 2600, are most likely impossible, not because our systems are not intelligent enough but because such systems are by their nature not predictable. In the short term, weather is largely, though not entirely, predictable. The same holds for what your kids will do tomorrow or, within limits, what the stock market will do. Long-term predictions are quite different.
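The standard toy illustration of this is the logistic map. The sketch below (my example, not from the post) runs two simulations whose starting points differ by one part in ten billion; the short-term forecasts agree, the long-term ones do not:

```python
# Chaotic amplification of a tiny initial difference in the logistic map
# x -> r * x * (1 - x), with r = 4.0 (the fully chaotic regime).
def trajectory(x, r=4.0, steps=50):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # a "miscalculation" of one part in ten billion

print(abs(a[1] - b[1]))    # after one step: still essentially identical
print(abs(a[50] - b[50]))  # after fifty steps: typically a large, unpredictable gap
```

The tiny error roughly doubles at each step, so within a few dozen iterations the two trajectories bear no useful resemblance to each other; no amount of added intelligence fixes the missing tenth decimal place.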

In The Sciences of the Artificial, Herb Simon provides a nice thought experiment about the temperature in various regions of a closed space. I am paraphrasing, but imagine a dormitory with four “quads.” Each quad has four rooms, and each room is partitioned into four areas with screens. The screens are not very good insulators, so if the temperatures in these areas differ, they will quickly converge. In the longer run, the temperature will tend toward the average of the entire quad. In the very long term, if no additional energy is added, the entire dormitory will tend toward the global average. So, when it comes to many kinds of interactions, nearby interactions dominate, but in the long term, more global forces come into play.

Now, let us take Simon’s simple example and consider what might happen in the real world. We want to predict what the temperature will be in a particular partitioned area in 100 years. In reality, the dormitory is not a closed system. Someone may buy a space heater and continually keep their little area much warmer. Or maybe that area has a window that faces south. But it gets worse. Much worse. We have no idea whether the dormitory will even exist in 100 years. That depends on fires, earthquakes, and the generosity of alumni. In fact, we don’t even know whether brick-and-mortar colleges will exist in 100 years. As we try to predict over longer and longer time frames, the determining factors become more distant, not only physically but conceptually. In a 100-year time frame, the entire college may or may not exist, and we don’t even know whether the determining factors will be financial, astronomical, geological, political, social, physical, or something else. This is not a problem that will be solved via “Artificial Intelligence” or by giving human beings “better brains” via biotech.

Whoa! Hold on there. Once again, it is possible that in some other dimension or using some other as yet undiscovered force, there is a law of conservation so that going “off track” in one direction causes forces to correct the imbalance and get back on track. It seems extremely unlikely, but it is conceivable that our model of how the universe works is missing some fundamental organizing principle and what appears to us as chaotic is actually not.

The scary part, at least to me, is that some descriptions of the wonderful world that awaits us (once our biotech or AI start-up is funded) depend on there being a much simpler, as yet unknown force or set of forces that is discoverable and completely unanticipated. Color me “doubting Thomas” on that one.

It isn’t just that investing in such a venture might be risky in terms of losing money. It is that human blind pride makes people presume that they can predict what the impact of a genetic change will be, not just on a particular species in the short term, but on the entire planet in the long run! We can indeed make small changes in both biotech and AI and see improvements in our lives. But when it comes to recreating dinosaurs in a real-life Jurassic Park or replacing human psychotherapists with robotic ones, we really cannot predict what the net effect will be. As humans, we are certainly capable of imagining possibilities, containing them, and slowly testing them as we introduce them. Yeah. That could happen. But…

What seems to actually happen is that companies not only want to make more money; they want to make more money now. We have evolved social and legal and political systems that put almost no brakes on runaway greed. The result is that more than one drug has been put on the market that has had a net negative effect on human health. This is partly because long-term effects are very hard to ascertain, but the bigger cause is unbridled greed. Corporations, like horses, are powerful things. You can ride farther and faster on a horse, and certainly corporations are powerful agents of change. But the wise rider is master of, or partner with, the horse. They don’t allow themselves to be dragged along the ground by a rope, letting the horse go wherever it will. Sadly, that is precisely the position society is in vis-à-vis corporations. We let them determine the laws. We let them buy elections. We let them control virtually every news medium. We no longer use them to get amazing things done. We let them use us to get done what they want done. And what is that thing they want done? Make hugely more money for a very few people. Despite this, most companies still manage to do a lot of net good in the world. I suspect this is because human beings are still needed for virtually every vital function in the corporation.

What will happen once the people in a corporation are no longer needed? What will happen when people who remain in a corporation are no longer people as we know them, but biologically altered? It is impossible to predict with certainty. But we can assume that it will seem to us very much like magic. Very. Dark. Magic. Abracadabra!

Turing’s Nightmares

Old Enough to Know Less


Old Enough to Know Less?

There are many themes in Chapter 18 of Turing’s Nightmares. Let us begin with a major theme that is actually meant as practical advice for building artificial intelligence. I believe that an AI system that interacts well with human beings will need to move around in physical space and social space. Whether or not such a system will end up actually experiencing human emotions is probably unknowable. I suspect it will only be able to understand, simulate, and manipulate such emotions. I believe that the substance of which something is made typically has deep implications for what it is. In this case, the fact that we human beings are the product of a billion years of evolution and are made of living cells has implications for how we experience the world. Here, however, we are addressing a much less philosophical and more practical issue: moving around and interacting facilitates learning.

I first discussed this in an appendix to my dissertation. There, I compared human behavior in a problem-solving task to the behavior of an early and influential AI system modestly titled “The General Problem Solver.” In studying problem solving, I came across two interesting findings that seemed somewhat contradictory. On the one hand, Grand Master chess players had outstanding memory for “real” chess positions (i.e., ones taken from real high-level games). On the other hand, think-aloud studies of Grand Masters showed that they re-examined positions they had already visited earlier in their thinking. My hypothesis was that Grand Masters examined one part of the game tree, then another, and in so doing updated a slightly altered copy of their general evaluation function; the copy learned from the exploration, so that the evaluation function applied to this particular position was tuned to this particular position.

Our movements through space, in particular, provide us with a huge number of examples from which to learn about vision, sound, touch, kinesthetics, smell, and their relationships. What we see, for instance, when we walk is not a random sequence of images (unlike TV commercials!), but one with very particular and useful properties. As we approach objects, we most typically get more and more detailed images of those objects. This allows a constant tuning process for our ability to recognize things at a distance and with minimal cues.

An analogous case could be made for getting to know people. We make inferences and assumptions about people initially based on very little information. Over time, if we get to know them better, we have the opportunity to find out more about them. This potentially allows us (or a really smart robot) to learn to “read” people better over time. But it does not always work out that way. Because of the ambiguities of interpreting human actions and motives as well as the longer time delays, learning more about people is not guaranteed as it is with visual stimuli. If a person begins interacting with people who are predefined to be in a “bad” category, experience with that person may be looked at through such a heavy filter that people never change their minds despite what an outside observer might perceive as overwhelming evidence. If a man believes all people who wear hats are “stupid” and “prone to violence” he may dismiss a smart, peaceful person who wears a hat as “the exception that proves the rule” or say, “Well, he doesn’t always wear hats” or “The hats he wears are made by non-hat wearers and that makes him seem peaceful and intelligent.” The continued misperceptions, over-generalizations, and prejudices partly continue because they also form a framework for rationalizing greed and unfairness. It’s “okay” to steal from people who wear hats because, after all, they are basically stupid and prone to violence.

Unfortunately, when it comes to the potential for humans to learn about each other, there are a few people who actually prey on and amplify the unenlightened aspects of human nature, because they themselves gain power, wealth, and popularity by doing so. They say, in effect, “All the problems you are experiencing — they are not your fault! They are because of the people with hats!” It’s a ridiculous presumption, but it often works. Would intelligent robots be prone to the same kinds of manipulations? Perhaps. It probably depends, not on a wheelbarrow filled with rainwater, but on how they are initially programmed. I suspect that an “intelligent agent” or “personal assistant” would be better off if it could take a balanced view of its experience rather than one directed top-down by pre-programmed prejudice. In this regard, creators of AI systems (as well as everyone else) would do well to employ the “Iroquois Rule of Six.” What this rule claims (taken from the work of Paula Underwood) is that when you observe a person’s actions, it is normal to immediately form a hypothesis about why they are doing what they do. Before you act, however, you should typically generate five additional hypotheses about why they do as they do. Try to gather evidence about these hypotheses.

If prejudice and bigotry are allowed to flourish as an “acceptable political position,” it can lead to the erosion of peace, prosperity, and democracy. This is especially dangerous in a country as diverse as the USA. Once negative emotions about others are accepted as fine and dandy, prejudice and bigotry can become institutionalized. For example, in the Jim Crow South, not only were many if not most individual “Whites” themselves prejudiced; it became illegal even for unprejudiced whites to sit at the same counters, use the same restrooms, etc. People could literally be thrown in jail simply for being rational. In Nazi Germany, not only were Jews subject to genocide; German non-Jewish citizens could be prosecuted for aiding them; in other words, for doing something human and humane. Once such a system became law, with an insane dictator at the helm, millions of lives were lost in “fixing” it. Of course, even having the Allies win World War II did not bring back the six million Jews who were killed. The Germans were very close to developing the atomic bomb before the USA. Had they developed such a bomb in time, with an egomaniacal dictator at the helm, would they have used it to impose their hatred of Jews, Gypsies, homosexuals, and the differently abled on everyone? Of course they would have. And then, what would have happened once all the “misfits” were eliminated? You guessed it. Another group would have been targeted. Because getting rid of all the misfits would not bring the promised peace and prosperity. It never has. It never will. By its very nature, it never could.

Artificial Intelligence is already a useful tool. It could continue to evolve in even more useful and powerful directions. But how does that potential for a powerful amplifier of human desire play out if it falls into the hands of a nation with atomic weapons? How does that play out if that nation is headed by an egomaniac who plays on the very worst of human nature in order to consolidate power and wealth? Will robots be programmed to be “open-minded” and learn for themselves who should be corrected, punished, imprisoned, eliminated? Or will they become tools to eliminate ever-larger groups of the “other” until no-one is left but the man on the hill, the man in the high castle? Is this the way we want the trajectory of primate evolution to end? Or do we find within ourselves, each of us, that more enlightened seed to plant? Could AI instead help us finally overcome prejudice and bigotry by letting us understand more fully the beauty of the spectrum of what it means to be human?

—————————————-

More about Turing’s Nightmares can be found here: Author Page on Amazon

Deconstructing the job-based economy. 


Recently, various economists, business leaders, and twitterists have opined about the net result of artificial intelligence and robotics on jobs. Of course, no-one can really predict the future. (And that will remain true even should a “hyper-intelligent AI system” evolve.) The discussion does raise interesting points about the nature of work and what a society might be like if only a small fraction of people are “required” to work in order to meet the economic needs of the population.

As one tries to be precise, it becomes necessary to be a little clearer about what is meant by “work,” “the economic needs,” and “the population.” For example, at one extreme, one can imagine a society that requires nearly everyone to work, but only between the ages of 30 and 50 and only for a few hours a week. This would allow the “work” to be spread widely through the population. Or one could imagine “work” in which everyone, not just a few researchers and academics, would be encouraged to spend at least 50% of their time continuing to monitor and improve their performance, take courses, do actual research, take the time to communicate with users, etc. Alternatively, one could imagine a society in which only a tenth to a third of the people worked while the others did not work at all. In still another version, rather than long-term jobs, people would have a way of posting needs for very small, self-contained tasks, and others would choose the ones they want in return for credits that can be used for various luxuries.

When we speak of economic “needs,” we might do well to distinguish between “needs” and “wants,” although these are not absolutely well-defined categories. We need nutrition and have no need for refined sugar, but to most people it tastes good, so we may well “want” it. We can imagine that, at one extreme, the economy produces enough of some bland substance like “Soylent Green” to provide everyone’s nutritional needs, but no-one ever gets a gourmet meal (or even a burger with fries). It gets rather fuzzier when we discuss “contingent needs.” No-one “needs” a computer, after all, in order to live. However, if you “must” do a job, you may well “need” a computer to do that job. If you want to live a full life, you may “want” to take pictures and store them on your computer. If you want, on the other hand, to spy on everyone and be able to charge exorbitant prices in the future, then you “need” to convince everyone to store their photos in the “cloud.” Then, once everyone has all their photos in the cloud, you can arbitrarily do whatever you want to mess them over. You don’t really “need” to drive folks crazy, but it might be one way to get rich.

How much “work” is required depends not only on how fully we satisfy wants as well as needs, but also on the size of the population being supported. For many millennia, the population of the earth subsisted by hunting and gathering and stayed small and stable. We cannot support 7 billion people in that manner. Seven billion require some type of agriculture, although it might be done more locally and not require agro-business. In any case, all the combinations of population size, how broadly human wants and needs are to be satisfied, and how work is distributed across the population will make huge differences in the social, economic, and political implications of “The Singularity.” Even if an actual “Singularity” is never reached, tsunamis of change are in store due to robotics, artificial intelligence, and the Internet of Things.

Work is not only about providing economic value in return for other economic value. Work provides people with many of their social connections. Friends are often met through work, as are spouses. Even the acquaintances at work who never become friends provide a social facilitation function. If there is no work, people can find other ways to engage socially with others, e.g., walking in parks, playing on sports teams, constructing collaborative works of art, making music, etc. It is likely that people need (not just want) some feeling not only of social connection but of social contribution. We are probably “wired” to want to help others, provide value, give others pleasure, and so on. If work for pay is not necessary for most people, some other ways must be devised so that each person feels that they are “important” in the sense of providing value to the others in their “tribe.”

Work provides people “focus” as well as identity. If work is not economically necessary, other mechanisms must be available that also provide focus and identity. Currently, in areas where jobs are few and far between, people may find focus and identity in “gangs.” Hopefully, if millions of people lose jobs to automation, artificial intelligence, and robotics, we will collectively find better alternatives for providing a sense of belonging, focus, and identity than lawless gangs.

Some of the many “jobs” performed by AI systems in Turing’s Nightmares include: musical composer, judge, athlete, lawyer, driver, family therapist, doctor, executioner, disaster recovery, disaster planning, peacemaker, personal assistant, winemaker, security guard, and self-proclaimed god. Do you think there are jobs that can never be performed by AI systems?

—————————————

Readers may enjoy my book about possible implications of “The Singularity.”

http://tinyurl.com/hz6dg2d

The following book explores (among other topics) how amateur sports may provide many of the same benefits as work.

http://tinyurl.com/ng2heq3

You can also follow me on twitter JCThomas@truthtableJCT

Doing One’s Level Best at Level Measures


(Is the level of beauty in a rose higher or lower than that of a sonnet?)

An interesting sampling of thoughts about the future of AI, the obstacles to “human-level” artificial intelligence, and how we might overcome those obstacles can be found in the Businessweek article linked below.

I find several interesting issues in the article. In this post, we explore the first: the very idea of “human-level” intelligence implicitly assumes that intelligence has levels. Within a very specific framework, it might make sense to talk about levels. For instance, if you are building a machine vision program to recognize hand-printed characters, and you have a very large sample of such characters to test on, then it makes sense to measure your improvement in terms of accuracy. However, humans are capable of many things, and, equally important, other living things are capable of an even wider variety of actions. Is building a beehive a “higher” or “lower” level of intelligence than creating a tasty omelet out of whatever is left in the refrigerator, or improvising on the piano, or figuring out how to win a tennis match against a younger, stronger opponent? Intelligence can only be “leveled” meaningfully within a very limited framework. It makes no more sense to talk about “human-level” intelligence than it does to talk about “rose-level” beauty. Does a rainbow achieve something slightly less than, equal to, or greater than “rose-level” beauty? Intelligence is a many-splendored thing, and it comes in myriad flavors, colors, shapes, keys, and tastes. Even within a particular field like painting or musical composition, not everyone agrees on what is “best” or even what is “good.” How does one compare Picasso with Rembrandt, or The Beatles with Mozart or Philip Glass?

It isn’t just that talking about “levels” of intelligence is epistemologically problematic. It may well divert resources from solving real problems. Instead of trying to emulate and then surpass human intelligence, it makes more practical sense to determine the kinds of useful tasks that computers are particularly well suited for and that people are bad at (or don’t particularly enjoy), and build programs and machines that are really good at those machine-oriented tasks. In many cases, enlightened design for a task can produce a human-computer system, with machine and human components, that is far superior to either separately, both in terms of productivity and in terms of human enjoyment.

Of course, it can be interesting and useful to do research about perception, motion control, and so on. In some cases, trying to emulate human performance can help develop practical new techniques and approaches to solving real problems, and it helps us learn more about the structure of task domains and about how humans do things. I am not at all against seeing whether a computer can win at Jeopardy, play superior Go, invent new recipes, or play ping pong. We can learn on all three of the fronts listed above in any of these domains. However, in none of these cases is the likely outcome that computers will “replace” human beings at playing Jeopardy, playing Go, creating recipes, or playing ping pong.

The more problematic domains are jobs, especially jobs that people perform primarily or importantly to earn money to survive. When the motivation behind automation is merely to make even more money for people who are already absurdly wealthy while simultaneously throwing people out of work, that is a problem for society, and not just for the people who are thrown out of work. In many cases, work, for human beings, is about more than a paycheck. It is also a major source of pride, identity, and social relationships. To take all of these away at the same time that a huge economic burden is imposed on someone seems heartless. In many cases, the “automation” cannot really do the complete job. What automation does accomplish is part of the job. Often the “customer” or “user” must do the rest of the job themselves. Most readers will have experienced dialing a “customer service” number which actually provides no relevant customer service. Instead, the customer is led through a maze of twisty passages organized by principles that make sense only to the HR department. Often the choices at each point in the decision tree are neither complete nor disjunctive — at least from the customer’s perspective. “Please press 1 if you have a blue car; press 2 if you have a convertible; press 3 if your car is newer than 2000. Press 4 to hear these choices again.” If the company you are trying to contact is a large enough one, you may be able to find the “secret code” to get through to a human operator, in which case you will be put into a queue approximately the length of the Nile.

Then you are subjected to endless minutes of really bad Muzak, interrupted by the disingenuous announcement “Please stay on the line. Your call is important to us,” as well as the ever-popular “Did you know that you can solve all your problems by going online and visiting our website at www.wedonotcareafigsolongaswesavemoneyforus.com/service/customers/meaninglessformstofillout”? This message is particularly popular with companies that provide internet access, because often you are calling them precisely because you have no internet access. Anyway, the point is that the company has not actually automated the service; it has automated part of the service, causing you further hassle and frustration.

Some would argue that this is precisely why progress in artificial intelligence could be a good thing. AI would allow you to spend less time listening to Muzak and more time interacting with an agent (human or computer) who still cannot really solve your problem. What is even more fascinating are the calculations behind the company’s decision to buy or develop an AI system to help you. Calculating the impact of poor customer service on customer retention is tricky, so that part is typically just not done. The cost savings from firing 10 human operators, including overhead, might be calculated at $500,000 per year, while the cost of buying or developing an AI system might be only $2,000,000. (Incidentally, $100K could easily improve the dialogue structure above, but almost no-one does that. It would be like washing your hands to help prevent the flu when instead you can buy an expensive herbal supplement.) So it seems as though it would take only four years to reach the break-even point on the AI project. Not bad. Except. Except that software systems never stay stable for four years. There will undoubtedly be crashes, bug fixes, updates, crashes caused by bugs, updates to fix the bugs, crashes caused by the bugs in the updates to fix the bugs, and security breaches and viruses requiring the purchase of still more software. The security software will likely cause the updates to fail, and soon additional IT staff will be required and hired. The $500K/year spent on people to answer your queries will be saved, but by year four the IT staff payroll may well have grown to $4,000,000 per annum.
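The arithmetic in this paragraph can be laid out explicitly. In the sketch below, the $500K annual savings, the $2M system cost, and the year-four $4M IT payroll come from the text; the intermediate years of the IT ramp are my own illustrative guesses:

```python
savings_per_year = 500_000   # salary + overhead of the 10 fired operators
system_cost = 2_000_000      # buying or developing the AI system

# The pitch the company hears: break even in four years.
naive_breakeven_years = system_cost / savings_per_year
print(naive_breakeven_years)  # 4.0

# The part left out of the pitch: a growing IT payroll (years 1-4; only the
# final figure is from the text, the ramp is illustrative).
it_payroll = [250_000, 1_000_000, 2_500_000, 4_000_000]

cumulative = -system_cost
for payroll in it_payroll:
    cumulative += savings_per_year - payroll
print(cumulative)  # -7750000: far below zero, and getting worse each year
```

On the naive model the project pays for itself in year four; once the IT ramp is included, the company is millions in the hole by the very year it expected to break even.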

My advice to users of such systems is to comfort themselves with the knowledge that, although the company replaced their human operators in order to make more money for themselves, they are probably losing money instead. Perhaps that thought can help sustain you through a very frustrating dialogue with an “Intelligent Agent.” Well, that plus the knowledge that ten more people have at least temporarily lost their livelihood.

The underlying problems here are not in the technology. The problems are greed, hubris, and being a slave to fashion. It is never enough for a company to be making enough money, any more than it is enough for a dog to have one bone in its mouth. As the dog crosses a bridge, it looks into the river below and sees another dog with a bone in its mouth. The dog barks at the other dog. In dog language, it says, “Hey! I only have one bone. I need two. Give me yours!” Of course, by opening its mouth, the dog loses the bone it already had. That’s the impact of being too greedy. A company has a pre-eminent position in some industry and makes a decent profit. But it isn’t enough profit. It sees that it can improve profit simply by cutting costs such as sales commissions, travel to customer sites, education for its employees, long-term research and so on. Customers quickly catch on and move to other vendors. But this reduces the company’s profits, so it cuts costs even more. That’s greed.

And, then there is hubris. Even though the company might know that the strategy they are embarking on has failed for other companies, this company will convince itself that it is better than those other companies and it will work for them. They will, by God, make it work. That’s hubris. And hubris is also at work in thinking that systems can be designed by clever engineers who understand the systems without doing the groundwork of finding out what the customer needs. That too is hubris.

And finally, our holy trinity includes fashion. Since it is fashionable to replace most of your human customer service reps with audio menus, the company wants to prove how fashionable it is as well. It doesn’t feel the need to actually think about whether any of this makes sense. Since it is fashionable to remind customers about the website, it will do that as well. And since it is now fashionable to replace the rest of the human customer service reps with personal assistants, this company will do that too, so as not to appear unfashionable.

Next week, we will look at other issues raised by “obstacles” to creating human-like robots. The framing itself is interesting: by using the word “obstacles,” the article presumes that “of course” society should create human-like robots, and that the questions of importance are simply what the obstacles are and how we overcome them. The question of whether or not creating human-like robots is desirable is thereby finessed.

—————————-

Follow me on Twitter: @truthtableJCT

Turing’s Nightmares

See the following article for a treatment about fashion in consumer electronics.

Pan, Y., Roedl, D., Blevis, E., & Thomas, J. (2015). Fashion Thinking: Fashion Practices and Sustainable Interaction Design. International Journal of Design, 9(1), 53-66.

The Winning Weekend Warrior discusses strategy and tactics for all sports — including business. Readers might also enjoy my sports blog

http://www.businessinsider.com/experts-explain-the-biggest-obstacles-to-creating-human-like-robots-2016-3

Sweet Seventeen in Turing’s Nightmares


When should human laws sunset?

Spoiler alert. You may want to read the chapter before this discussion. You can find an earlier draft of the chapter here:

blog post

And, if you insist on buying the illustrated book, you can do that as well.

Turing’s Nightmares

Who owns your image? If you are in a public place, US law, as I understand it, allows your picture to be taken. But then what? Is it okay for your uncle to put the picture on a dartboard and throw darts at it in the privacy of his own home? And is it still okay to do that even after you apologized for that joy ride you took in high school with his red Corvette? Then, how about if he publishes a photoshopped version of your picture next to a giant rat? How about if you appear to be petting the rat? Or worse? What if he uses your image as an evil character in a video game? How about a VR game? What if he captures your voice and the subtleties of your movement and makes it seem like it really might be you? Is it ethical? Is it legal? Perhaps it is necessary that he pay you royalties if he makes money on the game. (For a real-life case in which a college basketball player successfully sued to get royalties for his image in an EA Sports game, see this link: https://en.wikipedia.org/wiki/O%27Bannon_v._NCAA)

Does it matter for what purpose your image, gestures, voice, and so on are used? Meanwhile, in Chapter 17 of Turing’s Nightmares, this issue is raised along with another one. What is the “morality” of human-simulation sex — or domination? Does that change if you are in a committed relationship? Ethics aside, is it healthy? It seems as though it could be an alternative to surrogates in sexual therapy. Maybe having a person “learn” to make healthy responses is less ethically problematic with a simulation. Does it matter whether the purpose is therapeutic with a long term goal of health versus someone doing the same things but purely for their own pleasure with no goal beyond that?

Meanwhile, there are other issues raised. Would the ethics of any of these situations change if a protagonist in any of these scenarios is itself an AI system? Can AI systems “cheat” on each other? Would we care? Would they care? If they did not care, does it even make sense to call it “cheating”? Would there be any reason for humans to build robots of two different genders? And if there were, why stop at two? In Ursula Le Guin’s book, The Left Hand of Darkness, there are three, and furthermore they are not permanent states. https://www.amazon.com/Left-Hand-Darkness-Ursula-Guin/dp/0441478123?ie=UTF8&*Version*=1&*entries*=0

In chapter 14, I raised the issue of whether making emotional attachments is just something we humans inherited from our biology, or whether there are reasons why any advanced intelligence, carbon- or silicon-based, would find it useful, pleasurable, desirable, etc. Emotional attachments certainly seem prevalent in the mammalian and bird worlds. Metaphorically, people compare the attraction of lovers to gravitational attraction, or even to chemical bonding or electrical or magnetic attraction. Sometimes it certainly feels that way from the inside. But is there more to it than a convenient metaphor? I have an intuition that there might be. But don’t take my word for it. Wait for the Singularity to occur and then ask it/him/her. Because there would be no reason whatsoever to doubt an AI system, right?

Turing’s Nightmares: Chapter 16


WHO CAN TELL THE DANCER FROM THE DANCE?

Is it the same dance? Look familiar?

The title of chapter 16 is a slight paraphrase of the last line of William Butler Yeats’s poem “Among School Children.” The actual last line is: “How can we know the dancer from the dance?” Both phrasings focus on the interesting problem of trying to separate process from product, and personage from creative work, calling into question whether such a separation is even possible. In any case, the reason I chose this title is to highlight that when it comes to the impact of artificial intelligence (or, indeed, computer systems in general), a lot depends on who the actual developers are: their goals, their values, their constraints and contexts.

In the scenario of chapter 16, the boss (Ruslan) of one of the main developers (Geoffrey) insists on putting in a “back door.” What this means in this particular case is that someone with an axe to grind has a way to ensure that the AI system gives advice that causes people to behave in the best interests of those who hold the key to this back door. Here, the implication is that some wealthy oil magnates have “made” the AI system discredit the idea of global warming so as to maximize their short-term profits. Of course, this is a work of fiction. In the real world, no one would conceivably be evil enough to mortgage the human habitability of our planet for even more short-term profit — certainly not someone already absurdly wealthy.

In the story, the protagonist, Geoffrey, is rather resentful of having this requirement for a back door laid on him. There is a hint that Geoffrey was hoping the super-intelligent system would be objective. We can also assume the requirement was added late but that no additional time was added to the schedule. We can assume this because software development is seldom a purely rational process. If it were, software would actually work; it would be useful and usable. It would not make you want to smash your laptop against the wall. Geoffrey is also afraid that the added requirement might make the project fail. Anyway, Geoffrey doesn’t take long to hit on the idea that if he can engineer a back door for his bosses, he can add another one for his own uses. At that point, he no longer seems worried about the ethical implications.

There is another important idea in the chapter, and it actually has nothing to do with artificial intelligence per se, though it certainly could be used as a persuasive tool by AI systems. Rather than have a single super-intelligent being (which people might understandably have doubts about trusting), there are two “Sings,” and they argue with each other. These arguments reveal something about the reasoning and facts behind the two positions. Perhaps more importantly, a position is much more believable when “someone” (in this case, a super-intelligent someone) is persuaded by arguments to change its position and “agree” with the other Sing.

The story does not go into the details of how Geoffrey used his own back door into the system to drive a wedge between his boss, Ruslan, and Ruslan’s wife. People can be manipulated. Readers should design their own story about how an AI system could work its woe. We may imagine that the AI system has communication with a great many devices, actuators, and sensors in the Internet of Things.

You can obtain Turing’s Nightmares here: Turing’s Nightmares

You can read the “design rationale” for Turing’s Nightmares here: Design Rationale