Man and Machine in the 1960s1
Sungook Hong (IHPST, University of Toronto)
sungook@chass.utoronto.ca
“Remember your humanity and forget the rest.”
(From the invitation to the first Pugwash Conference, 1957)
Introduction
The 1960s was an era of cultural revolution and socio-political upheavals. It was a period of counter-culture movement, Vietnam and student protests, the civil-rights movement, and the beginning of the environmental movement. For some people, the sixties was a Golden Age or “mini-renaissance”; for others it was the age that witnessed the disintegration of traditional values. Culture, as well as human and social relationships, changed fundamentally during this period.2
The sixties was also an age of new science and technology. In molecular genetics, the structure and the function of RNA and the mechanism of genetic coding were discovered. In technology, man landed on the moon in 1969. Contraceptive pills were introduced in the early 1960s, which triggered the sexual revolution, and electrical technologies such as color TVs and music players, as well as computers, became more popular. Some technologies were tightly linked to war. The “electronic battlefield” was first introduced during the Vietnam War; it represented the movements of enemy forces as simulated graphics on a computer screen. The rapid stockpiling of hydrogen bombs heightened fears of total annihilation. Films such as 2001: A Space Odyssey (1968) and Dr. Strangelove (1964) depicted frightening relationships emerging between humans and new technologies.3
The purpose of my paper is to discuss new conceptions and ideas about the relationship between man and machine that emerged in the 1960s.4 The relationship had some unique features. There were intense concerns and debates over automation. Man-made machines -- the hydrogen bomb, in particular -- began to threaten the very survival of humanity itself, while cybernetics and systems theory blurred the strict boundary between machine and organism. The development of computer science and artificial intelligence forced people to rethink the nature of human intelligence and understanding, a hallmark of humanity since the time of Descartes. Because of the emergence of these new commonalities between man and machine, humans began searching for a different essence of humanity that could save them from the threat of mechanization and total annihilation. It is in this context of rapid technological change and social upheaval, I will argue, that the meaning of humanity shifted from intelligence to emotions and feelings.
Mumford and the “Megamachine”
According to Wiener’s cybernetics, an idea he developed in the late 1940s, there is no essential difference between man’s intentional movements and a torpedo that follows its target: both can be explained in terms of control by the feedback of information.5 In the 1960s, cybernetic ideas became more popular for several reasons. First, Wiener’s popular book, God and Golem (1964), was widely circulated and reviewed. Second, around 1960, second-wave cybernetics, in which the observer of a feedback system is included in the system, was proposed by Heinz von Foerster. He extended this idea into the self-organizing system. Several symposiums were held in the late 1950s and 1960s to discuss this new idea.6 And third, two scientists, Mansfield Clynes and Nathan Kline, who worked for the American space program, coined the term “cyborg” in 1960. It stood for a cybernetic organism, a hybrid system of both artifact and organism. They thought that it could give man a freedom “to explore, to create, to think, and to feel” in a highly mechanized environment like a spaceship. Before long, cyborg became a very popular term.7
One of the reasons why the idea of the cyborg captured the public imagination in the 1960s is that it was proposed at a time of intense concern over automation. During the 1950s, “to ensure future technological progress, increase productivity and ease the strain on workers,”8 the pace of factory mechanization with computerized machinery and cybernetic devices -- that is, automation -- was dramatically increased. How this automation would affect society was not yet certain. There was as much optimism as pessimism among those concerned. Optimists argued that automation would free workers from drudgery and monotonous labor.9 Pessimists and critics, on the other hand, argued that automation would replace workers with machines, and render the remaining workers part of the machinery: “automation not only frees human operators from routine work; it also frees the machinery from the restrictions imposed on it by man’s limitations.”10
The criticism was extended to technological society and technical rationality. Erich Fromm deplored the notion that the ideal man for modern capitalist society was an “automaton, the alienated man.” Jacques Ellul’s Technological Society, first published in French in 1954 and translated into English in 1965, blamed modern technology for emphasizing technological efficiency over other important social and humane values. Throughout, he emphasized the following point: “The combination of man and technics is a happy one only if man has no responsibility; ... technique prevails over the human being ... Human caprice crumbles before this necessity; there can be no human autonomy in the face of technical autonomy.” C. Wright Mills also characterized individuals in mass society as “cheerful robots.” Herbert Marcuse, in his widely read One Dimensional Man, criticized technological rationality as a form of control and domination.11
Lewis Mumford was an influential critic as well. In his short article published in Technology and Culture in 1964, “Authoritarian and Democratic Technics,” Mumford divided technics into two types: authoritarian technics, which is system-centered and seeks uniformity and standardization, and democratic technics, which is human-centered and values variety and ecological complexity. Suggesting that “the inventors of nuclear bombs ... and computers are pyramid builders of our own age,” Mumford pointed out that “through mechanization, automation, cybernetic direction, this authoritarian technics has at last successfully overcome its most serious weakness.” What was its most serious weakness? It was “its original dependence upon resistant and sometimes actively disobedient” humans. To Mumford, technological developments in the 20th century represented an increasing effort to fully incorporate and assimilate disobedient humans into a system of machines. Mumford reasoned that the more technology becomes system-centered, the more it becomes autonomous or alive. It escapes from human control, even the control of “technical and managerial elites.” Authoritarian technics is a “megatechnics” or “megamachine” which has both technoscientific and bureaucratic apparatuses.
The alternative to the megamachine lay in injecting “the rejected parts of human personality” into science and technology. “We cut the whole system back to a point at which it will permit human alternatives, human interventions, and human destinations for entirely different purposes from those of the system itself.” Men must be disobedient. Be a Thoreau rather than a Marx. To support his argument on the significance of human elements, Mumford provided two interesting episodes. The first was the huge electric power failure in the northeastern US in 1965. Mumford cited a magazine article reporting that the failure turned the entire city of New York dark and dead, but suddenly “the people [in New York] were more alive than ever.” The second episode was the experience of US astronaut John Glenn. His spaceship was programmed to control itself automatically, but when its automatic control began to malfunction, John Glenn insisted on controlling it manually by sending a message to the US control center. Glenn’s message was in fact the message that Mumford wanted to send: “Let man take over!”12
Mumford’s warning was clear: as technology becomes autonomous, humans become mechanized. “Instead of functioning actively as an autonomous personality, man will become a passive, purposeless machine-conditioned animal.”13 His pessimism was shared by many. The economist John Kenneth Galbraith wrote (in The New Industrial State) that “we are becoming the servants ... of the machine we have created to serve us.” Rene Dubos, a famous microbiologist, also suggested that “technology cannot theoretically escape from human control, but in practice it is proceeding on an essentially independent course.”14 What is interesting here is that the relationship between man and machine had been reversed. Man was no longer the master of his slave, technology: technology had become the master, and man had become its slave.15 Isaac Asimov had introduced the famous “three laws of robotics” in one of his science-fiction stories in 1942, but Asimov’s fictional fear had now become real to Mumford.16
The French philosopher Georges Canguilhem had suggested an essentially new way of thinking about machines as an extension of human organs. Rather than trying to explain the organism in terms of machines, “machines can be considered as organs of the human species. A tool or a machine is an organ, and organs are tools or machines.” Canguilhem’s proposal was anti-Cartesian (“the historical reversal of the Cartesian relationship between the machine and the organism”), because he rejected Descartes’s idea of the animal-as-a-machine, which gave humans a special status to control and exploit the organic world, including animals. By giving up the old Cartesian notion of the organism as a sort of machine, and by embracing a new philosophy of machines as organic, Canguilhem in effect suggested that Western man’s tendency to enslave organic nature, including animals, could be stopped.17
According to Mumford, machines not only became part of human organs, but humans became components of a “megamachine” that exploited organic nature more than ever. Mumford’s megamachine was different from the mechanization that began in the early 19th century, although some nineteenth-century thinkers also felt that men had become a “hand” of the mechanical system. In “Signs of the Times” (1829), Thomas Carlyle stated that “men are grown mechanical in head and heart, as well as in hand.” Andrew Ure’s Philosophy of Manufactures (1835) described the factory as a “vast automaton, composed of various mechanical and intellectual organs, acting in uninterrupted concert for the production of a common object, all of them being subordinated to a self-regulated moving force.” Karl Marx also noted that “an organized system of machines, to which motion is communicated by the transmitting mechanism from a central automaton, is the most developed form of production by machinery. Here we have, in the place of the isolated machine, a mechanical monster whose body fills whole factories, and whose demon powers, at first veiled under the slow and measured motions of his giant limbs, at length breaks out into the fast and furious whirl of his countless working organs.”
Mumford’s megamachine was much more than an automated machine or a mechanized factory. It was technocracy plus bureaucracy, with its own methods, philosophy and religion. It was essentially uncontrollable. Mumford was not as pessimistic when he wrote Technics and Civilization in 1934. Here, he insisted that we should absorb “the lessons of objectivity, impersonality, neutrality, [and] the lessons of the mechanical realm.” What made him change his mind about technology? The intellectual and socio-cultural milieu of the 1960s was partly responsible, but we can find an answer to this question in his own writing. It was “mechanization, automation, [and] cybernetic direction” that endowed authoritarian technics with immense power.18
“Cyberscience” and Blurring the Man-Machine Boundary
Several historians of science and technology recently noted that some new branches of science and engineering reinforced each other in the 1950s and 1960s, creating a powerful “discourse of information.” The impact of this discourse was most apparent in molecular biology and genetics. In 1970, the Nobel laureate François Jacob stated that “heredity is described today in terms of information, message, and code” and that “the program [of modern biology] is a model borrowed from electronic computers; it equates the genetic material of an egg with the magnetic tape of a computer.” This altered the relationship between man and machine, said Jacob. Now, “the machine can be described in terms of anatomy and physiology” as much as “organs, cells and molecules are united by a communication network,” which exchange signals and messages.19
The transformations that took place in molecular biology during the 1960s (and 1950s) allowed Jacob to describe its program and methodology in surprisingly novel terms. Lily Kay has discussed the combined influence of Wiener’s cybernetics, Claude Shannon’s information theory, and John von Neumann’s automata upon molecular biology. Shannon’s information was stripped of its semantic values (i.e., meanings) in ordinary languages, retaining only its technical (i.e., syntactic) values. Information was thus a metaphor, and the information discourse in molecular biology functioned as a “metaphor of metaphor” which transformed the human genome into a sort of text, or a signification, “without a referent.”20 Evelyn Fox Keller has disagreed with Lily Kay about the extent to which these new sciences affected molecular biology. Keller argues that the traffic of information from information theory and cybernetics to molecular biology was almost useless due to differences between genetic information and information defined as negative entropy. As she quotes André Lwoff, “(biological) functional order cannot be measured in terms of entropy units, and is meaningless from a purely thermodynamical point of view.” Further, she has pointed out that there was traffic in other directions. For example, while molecular biologists were building a new biology by eliminating vital conceptions such as biological functions, a number of physicists and engineers adopted this very traditional idea. But Keller has also acknowledged the importance of “cyberscience” -- information theory, cybernetics, systems analysis, operations research, and computer science -- for providing new metaphors such as information, message, coding, and feedback to molecular biologists. Computers, rather than clocks and steam engines, became the new model for the organism.21
In the same vein, the historian of technology David Channell claimed that a new “bionic world view” or a new concept of the “vital machine” emerged in the second half of the 20th century because of the combined effect of the development of system building, cybernetics, computer science, artificial intelligence, new biomedical engineering like artificial organs and electronic prosthetic devices, and genetic engineering. Many of the important developments that Channell described took place in the 1960s. Channell particularly emphasized the impact of the systems theory of Ludwig von Bertalanffy. According to Bertalanffy’s general systems theory, which was popularized in the 1960s by the Society for General Systems Research (founded in 1954), a system is constituted of various interacting components or subsystems. The most interesting feature of systems theory was that some of the system’s components may function in a more organic way than others, while some are more mechanical than others. In other words, a system consists of both organic and mechanical components, and it is therefore neither wholly organic nor wholly mechanical. This certainly blurred the boundary between the organic and mechanical realms. The systems idea, based on cybernetics and computer science, was used widely in explaining biological, ecological, social, military, and world systems. But while being used in such an overarching way, the systems idea itself became another “technique to shape man and society ever more into the ‘mega-machine’.”22
What is intriguing about “cyberscience” is the military support for it. Take, for example, the origins of cyberscience. Information science, cybernetics, operations research, and computers were created as solutions to the increasing complexity of military operations during WWII. Information science was developed to maximize the efficiency of communication; Wiener’s cybernetics was devised to effectively control a new hybrid system, the anti-aircraft predictor; operations research was employed to improve the efficiency of military maneuvers; and electronic computers were constructed for the calculation of projectile trajectories and for the atomic bomb. The link between cyberscience and the military continued well into the sixties. The Perceptron, a neural computer designed by Frank Rosenblatt, was funded by the Navy (Office of Naval Research), which was eager to solve the problem of the increasing complexity of military operations by using a computer that could learn. The Navy, as well as the Air Force, sponsored several symposiums on the self-organizing system in the 1960s, which were attended by biologists, mathematicians, engineers, and logicians. The idea of the self-organizing system was further developed by von Foerster at the University of Illinois, whose laboratory was fully supported by the military. The Air Force supported symposiums on bionics, too. The Air Force had been interested in communication networks in complex systems, and, as is well known now, it was the Air Force that commissioned ARPA (Advanced Research Projects Agency) to devise the first computer network, the Arpanet, which later became the backbone of the Internet. ARPA’s IPTO (Information Processing Techniques Office) supported computer science and research on artificial intelligence such as MIT’s Project MAC.23
However, although such military-supported research on artificial intelligence, communication theory, and systems theory eventually changed our understanding of the relationship between man and machine, its impact was barely felt outside the scientific community. The military technology that had the strongest impact on people’s thinking about man, machine, and society was the nuclear bomb. Nuclear bombs made the total annihilation of human beings possible, and people had to learn how to live with such horrible weapons. The most crucial problem of the sixties was “survival.”24
Nuclear Weapons Out-of-Control
As US President Kennedy said in the early 1960s, the destructive power of nuclear weapons was inconceivable. Renowned scientists -- fifty-two Nobel prize winners -- declared in the “Mainau Declaration” (1956) that “All nations must come to the decision to renounce force as a final resort of policy. If they are not prepared to do this, they will cease to exist.” The horrible impact of nuclear weapons on people’s lives was also highlighted by the publication of the study of Robert Lifton, who lived in Hiroshima for four months in 1962 and investigated the survivors’ lives seventeen years after the bombing. There are many chilling stories and recollections in Lifton’s studies, but the most horrible phenomenon was the survivors’ intimate identification with the dead, incorporating the atomic disaster into “their beings, including all of its elements of horror, evil, and particularly of death.” Later, he repeated a question that had been asked by Herman Kahn: “Would the survivors envy the dead?” Lifton’s answer was: “No, they would be incapable of such feelings. They would not so much envy as ... resemble the dead.” Nuclear war destroyed even the possibilities of “symbolic survival.”25
However, uncertainty dominated horror. There was a deep and essential uncertainty on the issue of nuclear weapons. The strategic analyst Herman Kahn, an expert “defense intellectual” at the Rand Corporation, dismissed the opinions of anti-nuclear scientists as “nonsense” and a “layman’s view.” His “objective” and “quantitative” studies, performed from a “Systems Analysis point of view,” showed that if a nuclear war occurred between the US and the Soviet Union, only forty to eighty (40-80) million US civilians would die. His point was that after the end of the war, civilization and the economy could be rapidly rebuilt by the survivors: “[My] thesis [is] that if proper preparations have been made, it would be possible for us or the Soviets to cope with all the effects of a thermonuclear war, in the sense of saving most people and restoring something close to the prewar standard of living in a relatively short period of time.” The figure he provided, 40-80 million, was significant, because, according to Kahn’s research, most Americans regarded ten to sixty (10-60) million casualties as acceptable in the case of a total war. Sixty million casualties (one-third of the total population) was the limit; Kahn claimed that only forty million US civilians would be killed if we carefully prepared for it. For all this, the US must have enough “capability to launch a first strike in a tense situation that would result from an outrageous Soviet provocation.” But this might induce the Soviet Union to attack the US directly rather than provoking it. Because of this, the US must also have enough retaliatory capacity to make the enemy’s first attack unattractive. This thinking, he said, is rational and logical.26
Another strategic analyst, Albert Wohlstetter, criticized the scientists’ involvement in strategic decisions. He quoted Bertrand Russell’s famous letter (1955) in which Russell wrote that “I enclose a statement, signed by some of the most eminent scientific authorities on nuclear warfare,” and then criticized it because “among the ten physicists, chemists, and a mathematical logician who were included, not one to my knowledge had done any empirical study of military operations likely in a nuclear war.” The issue of countermeasures in military conflicts, which involved political, military, and strategic (rather than technological) decisions, should be dealt with by a new discipline and new experts, who relied upon “the [quantitative] method of science,” not on “the authority of science.”27 Herman Kahn was, of course, the most famous, or notorious, expert in this new field of strategic analysis.28
The uncertainty about nuclear capabilities was magnified by technology itself. Herman Kahn identified five different ways in which nuclear war could start, and the way with the highest probability was by accident, such as false alarms, mechanical errors, or human errors. In the same article, however, he discussed the Doomsday Machine, a computerized machine that could destroy the entire earth, and the Doomsday-in-a-Hurry Machine, which was a Doomsday Machine for a different situation. The Suicide-Pact Machine and the Near-Doomsday Machine were also discussed. Although Kahn concluded that such Doomsday machines lacked strategic utility, it became evident to most that nuclear wars could be initiated by machines alone. The computerized nuclear system, including devastating bombs, became too complicated, and appeared to be almost out of control. Since the late 1950s, nuclear weapons had “proliferated” like living organisms. The arms race was partially accelerated by the “potential volatility of military technology.”29 The situation became more complicated, because such technical uncertainty and uncontrollability could be, and in fact was, used strategically to make a nuclear threat credible to the enemy. This was evident in a US military official’s statement that, by choosing nuclear weapons, “we largely abandon [ourselves] to terms and results dictated by the nature of nuclear weapons.” Politicians like Richard Nixon and Robert McNamara suggested that the US might start a nuclear war irrationally.30 The philosopher Erich Fromm lamented a world full of “impotent men directed by virile machines.” Paraphrasing Emerson’s line that “things are in the saddle and ride mankind,” Fromm claimed that “we still have a chance to put man back into the saddle.” However, this would not make everyone happy, especially anyone who thought that “evil is not in things but in man. ... To Control the Bomb is absurd... What we need to control is man.”31
Whose opinion should be trusted? Nobel laureates or the Rand Corporation?32 Kahn argued that anti-nuclear scientists were not logical and rational, while SANE (National Committee for a Sane Nuclear Policy) protested that any discussion of the actual use of nuclear bombs was insane and irrational. C. P. Snow also said that “between a risk [in the restriction of nuclear armament] and a certainty [in the total disaster], a sane man does not hesitate.” Could science and technology save people? A detailed study of nuclear armament by two scientific advisors concluded the contrary: “it is our considered professional judgement that this [nuclear] dilemma has no technical solution.”33 Whoever was right, there was one thing that people could do: build a nuclear shelter. In 1961, a student at Radcliffe College wrote in an essay that “the construction of shelters has become ... a fad, like the suburban swimming pool; for the rich, [it is] a new luxury, for the handy-man, a do-it-yourself toy.” She then added that “the Bomb ... is a sign of schizophrenic inconsistency;... the shelter represents not a reasoned effort to survive but a senseless gesture.” Schizophrenia was an apt metaphor for the mental status of humans living in the nuclear age.34
Schizophrenic Man, Sane Robots
Schizophrenia was frequently invoked in discussions about the nuclear bomb in the 1960s. It symbolized the inhuman condition of the sixties. Fromm stated that “in the nineteenth century inhumanity meant cruelty; in the twentieth century it means schizoid self-alienation.”35 Recall that Gregory Bateson had proposed a very interesting theory of schizophrenia in 1956, according to which a person may become a schizophrenic if he had been forced to endure (while very young) a “double-binding” situation -- a situation in which he cannot win no matter what he does. A typical double-binding situation was created in a family with a contradictory mother and the “absence of a strong father.”36 One may say that nuclear horror and conflicting authorities pushed the world into a sort of schizophrenic state.37
We can confirm this in the succinct description of the Austrian philosopher and psychiatrist Günther Anders. In his book Burning Conscience (1962), Anders wrote that the reality and the image of nuclear mass murder created the “raging schizophrenia of our day” where people act like “isolated and uncoordinated beings.” Anders’s use of the term schizophrenia was more than metaphoric. The book was a collection of the correspondence between Anders and the “hero of Hiroshima” Major Claude Robert Eatherly, who suffered from “the delayed action of the atomic bomb on its possessors.” In the 1950s, Eatherly had twice attempted suicide, been arrested for fraud, and alternated between court appearances and mental hospitals several times. He had been diagnosed as a schizophrenic, although Bertrand Russell later noted that the insanity existed within the society, not in him. In his first letter to Eatherly, Anders also defined the condition of mankind as the “technification of our being,” and continued to say that although Eatherly had been used as a screw in a “military machine,” he wanted to be a human again after the Hiroshima disaster. The revival of his humanity was responsible for his schizophrenic mental condition.38
In one of his letters to Anders, Eatherly spoke of nuclear scientists.
I would like to ask you some questions. Could we trust those nuclear scientists to delay their work and paralyze the political and military organizations? Would they be willing to risk their first love by giving up all the grants, laboratories and government support, and to unite and demand a trusted guardian for their brainchild? If they could do this, then we would be safe.39
What if “those nuclear scientists” were also “schizophrenic”? The metaphor of schizophrenia characterized the public image of science and scientists in the 1960s. For example, the well-known microbiologist Rene Dubos stressed in his George Sarton lecture of 1960 that “many modern scientists suffer from the schizophrenic attitude,” because of the disparity between scientists’ claim about the usefulness of science and criticisms from anti-science activists who described scientists as “thoroughly dehumanized” and “mechanized.” Scientists were similar to a product made by a “gadget” called the scientific community.40
Dubos’s comment is interesting, because it links a schizophrenic attitude to the “dehumanized” and “mechanized.” This link was more than metaphorical. The March 1959 issue of Scientific American reported a surprising story about Joey, a “Mechanical Boy,” who thought of himself as a machine or robot while suffering from severe schizophrenia.41 He behaved as if he were a machine, controlled by a remote control of his own fantasy. He believed that machines were better than people, because they were stronger. The doctors who treated him eventually discovered that his parents had transformed him into a sort of machine by treating him mechanically, without love or tenderness. The doctors therefore tried to revive the sense of human trust and feelings inside him. As Joey made progress, he gradually regained control of the mechanical environments around him. Then, he became able to relate emotionally to people. “Robots cannot live and remain sane. They become ‘golems’ [and] they will destroy their world and themselves.” Before this happened to Joey, humanity went “back into the saddle” and saved him.42
But machines entered the scene again. Five years later, in 1965, the New York Times published an article reporting that a machine, the Computerized Typewriter (Edison Responsive Environmental Learning System), had been used to successfully treat autism, a condition for which standard psychotherapy had failed and no cure or cause was known. The Computerized Typewriter was a human-like machine: it talked, listened, responded, and drew pictures, but it never punished. The doctor who treated autistic children with the machine had noted that many of these children had an abnormal preoccupation with mechanical objects. Several boys who had refused to speak to humans began talking with the machine, and after a year of therapy, they began to respond to human conversation. Some were able to return to school.43 A man-made machine -- the nuclear bomb -- pushed people into a schizophrenic mentality (metaphorically), but another machine -- the communication device, the Computerized Typewriter -- treated it. Was it because, as Bateson believed, human brains are essentially communication and thinking machines?44
Conclusion: From Intelligence to Emotions
In the 1960s, people perceived, and expressed, new relationships between man and machine. Automation, systems theory, cybernetics, genetics, information theory, artificial intelligence, computers, and atomic weapons contributed to these new visions. The visions ranged from optimism to apocalyptic pessimism. Some were close to reality, while others were imaginary and fantastic. The underlying philosophical question, however, remained the same: How can we retain our essential humanity in such a mechanized age? What makes us more than machines? As I cited in the epigraph at the beginning of this paper, the first Pugwash Conference invited participants to “Remember your humanity and forget the rest; if you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.” But what is our essential humanity?45
Since the time of Aristotle, Western people have believed that “the soul” or “the self” could distinguish humans from non-humans.46 The manifestation of the soul’s capacity is most clearly expressed in Descartes’s motto “cogito ergo sum” -- a capacity for reasoning or intelligent thinking. Animals felt, but they could not think. Animals were machines. Therefore, non-animal, man-made machines -- mechanical clocks, the steam engine, Vaucanson’s defecating duck, and the telegraph -- could not think either. But would this distinction remain valid in the age of automation, cybernetics, intelligent computers, self-reproducing automata, and the Doomsday machine?
In a popular exposition of the Turing machine and automata, John Kemeny addressed this question. His conclusion was that “there is no conclusive evidence for an essential gap between man and a machine [like an electronic computer]; for every human activity we can conceive of a mechanical counterpart.”47 Using an evolutionary metaphor, Bruce Mazlish emphasized that the distinction between man and machine had almost disappeared. He epitomized it in his discourse on the “fourth discontinuity.” Throughout human history, Mazlish argued, three great thinkers had “outraged man’s naive self-love”: Copernicus, who abolished the discontinuity between the earth and the universe; Darwin, who eliminated the discontinuity between man and animals; and Freud, who erased the discontinuity between the conscious and unconscious. But “a fourth and major discontinuity, or dichotomy, still exists in our time; it is the discontinuity between man and machine.” This discontinuity would be eliminated, Mazlish continued, if we realized that “man and machines are continuous.”48 Herbert Simon also noted that “as we begin to produce mechanisms that think and learn, [man] has ceased to be the species uniquely capable of complex, intelligent manipulation of his environment.”49 So did Wiener, who claimed that “machines can and do transcend some of the limitations of their designers, and that in doing so they may be both effective and dangerous.”50 John von Neumann, the designer of the stored-program computer and the first theory of self-reproducing automata, pointed out that, to survive technology, we must understand three (not one) essential human qualities: patience, flexibility, and intelligence. Intelligence alone was not enough, because some machines could think.51
But is computer intelligence the same as human intelligence? A theorist in artificial intelligence opposed their identity:
If machines really thought as men do, there would be no more reason to fear them than to fear men. But computer intelligence is indeed “inhuman”: it does not grow, has no emotional basis, and is shallowly motivated. These defects do not matter in technical applications, where the criteria of successful problem solving are relatively simple. They become extremely important if the computer is used to make social decisions, for there our criteria of adequacy are as subtle and multiply motivated as human thinking itself.52
In other words, human intelligence had an emotional basis and was deeply motivated. Emotions or feelings became what characterizes human beings. “Our experience of love and beauty” is not just emotion, but “moments of metaphysical insights.” Alfred Whitehead’s “philosophy of feelings” was revived.53 After several years of treatment, Joey, the Mechanical Boy, left the hospital for a Memorial Day parade. He carried a sign that read: “Feelings are more important than anything under the sun.”54 In the age of smart machines and nuclear holocaust, feelings became what would make humanity.55