The human flaw in the machine
Updated: Feb 9, 2019
By guest author: Dr. Isabella Hermann - Ms. AI Ambassador for AI & Politics
The fear that machines will take over even the most complicated jobs has been present in films for more than half a century. We can still learn a lot from these fictional portrayals today. The two greatest lessons: Most of the mistakes that machines make are already made by the humans who create them. And: The smartest decision an artificial intelligence (AI) system can make is sometimes to turn itself off.
Since the 1950s, the science fiction genre has engaged intensively with robots and computers. While works from this early period, such as the robot stories of Isaac Asimov or the illustrations of Arthur Radebaugh, were very optimistic about future technological progress, fears of machine domination were increasingly addressed in the 1960s. The prime example is Stanley Kubrick's film 2001: A Space Odyssey from 1968, in which the control computer of the spaceship Discovery – HAL 9000 – turns against the astronauts on board.
The computer that was supposed to render a starship captain superfluous
In the same year, the episode "The Ultimate Computer" of the original Star Trek series, with Captain James T. Kirk on the starship Enterprise, was broadcast on US television for the first time. The now 50-year-old episode addresses the concern that computers could make people obsolete, "take away" our jobs, and – through bad programming – draw conclusions their human coders never intended. These fears are very similar to the ones being discussed right now; however, the episode is set much further in the future – namely in the fictional Star Trek universe of the 2260s. In the episode, the revolutionary tactical and control computer "M5" is designed to operate Starfleet spaceships fully autonomously, without human assistance. The system is to undergo a practical test with navigation manoeuvres and complex war games on the Enterprise, with only a skeleton crew remaining on board. In a conversation with his friend and Chief Medical Officer Dr. Leonard McCoy, Kirk expresses his concern that he could lose his job as captain of the Enterprise:
“Am I afraid of losing command to a computer? I can do a lot of other things. Am I afraid of losing the prestige and the power that goes with being a starship captain? Is that why I'm fighting it? Am I that petty?”
The illusion of a computer always acting logically
It is remarkable that M5 is meant to take over not only routine tasks previously performed by humans, but also the highly demanding and respected "job" of the captain himself. Likewise, our current discussions are no longer just about the concern that simpler jobs, such as supermarket cashier or call center agent, could be replaced by computer systems. According to many studies, professions associated with high qualifications and a certain social status – lawyers or physicians, for example – could also be affected. At the beginning of the episode, M5 is introduced as a system superior to humans, one that assesses objectively. Commander Spock, first officer and member of the extraterrestrial race of Vulcans, who act free of emotion and purely according to logic, is enthusiastic about M5 – as far as his feelings allow: "Captain, the computer does not judge. It makes logical selections."
There are no neutral decisions - not even by AI
The first test, with simulated hits against an approaching enemy spacecraft, works just fine. It is all the more surprising, then, when M5 fires live ammunition at an unexpectedly appearing freighter and kills people. The computer obviously has a defect and is about to do even worse damage. Fortunately, Kirk manages to convince it in conversation that killing people runs against its very purpose.
Thereupon, M5 switches itself off. How could this malfunction occur? The explanation is very simple: The creator of M5 programmed the system with his own memory structure and thus transferred his own psychological abnormalities to the computer system.
There are thus no neutral, objectively assessing computers. AI always reflects the conscious and unconscious decisions of its developers and of the data sets with which the system was trained. The conclusions that computers draw from their algorithms and the data fed into them are therefore logical according to the way the systems function, but not necessarily socially desirable. This is why the current ethical debate about AI does – and must – address the discrimination against social groups by such automated systems.
The "Ultimate Computer" could mean either the "last" or the "best" computer. In the end, the best computer might be the one that turns itself off. One of the pioneers of AI research, Marvin Minsky, invented just such an "Ultimate Machine" back in the 1950s: a box with an on-off switch. When the machine is switched on, the box opens, a hand emerges, flips the switch back to "off", and disappears into the box again.