Man and Machine in the 1960s1
Sungook Hong (IHPST, University of Toronto)

sungook@chass.utoronto.ca

“Remember your humanity and forget the rest.”

(From the invitation to the first Pugwash Conference, 1957)

Introduction

The 1960s was an era of cultural revolution and socio-political upheaval. It was the period of the counter-culture movement, the Vietnam War and student protests, the civil-rights movement, and the beginning of the environmental movement. For some people, the sixties was a Golden Age or “mini-renaissance”; for others it was the age that witnessed the disintegration of traditional values. Culture, as well as human and social relationships, changed fundamentally during this period.2

The sixties was also an age of new science and technology. In molecular genetics, the structure and function of RNA and the mechanism of the genetic code were discovered. In technology, man landed on the moon in 1969. The contraceptive pill, introduced in the early 1960s, triggered the sexual revolution, and electrical technologies such as color TVs and music players, as well as computers, became more popular. Some technologies were tightly linked to war. The “electronic battlefield,” first introduced during the Vietnam War, rendered the movements of enemy forces as simulated graphics on a computer screen. The rapid stockpiling of hydrogen bombs heightened fears of total annihilation. Films such as 2001: A Space Odyssey (1968) and Dr. Strangelove (1964) depicted frightening relationships emerging between humans and new technologies.3

The purpose of my paper is to discuss the new conceptions and ideas about the relationship between man and machine that emerged in the 1960s.4 The relationship had some unique features. There were intense concerns and debates over automation. Man-made machines -- the hydrogen bomb, in particular -- began to threaten the very survival of humanity itself, while cybernetics and systems theory blurred the strict boundary between machine and organism. The development of computer science and artificial intelligence forced people to rethink the nature of human intelligence and understanding, a hallmark of humanity since the time of Descartes. Because of these new commonalities between man and machine, humans began searching for a different essence of humanity that could save them from the threat of mechanization and total annihilation. It is in this context of rapid technological change and social upheaval, I will argue, that the meaning of humanity shifted from intelligence to emotions and feelings.


Mumford and the “Megamachine”

According to Wiener’s cybernetics, an idea he developed in the late 1940s, there is no essential difference between a man’s intentional movements and a torpedo that follows its target: both can be explained in terms of control by the feedback of information.5 In the 1960s, cybernetic ideas became more popular for several reasons. First, Wiener’s popular book, God and Golem, Inc. (1964), was widely circulated and reviewed. Second, around 1960, second-wave cybernetics, in which the observer of a feedback system is included in the system, was proposed by Heinz von Foerster, who extended the idea into that of the self-organizing system; several symposiums were held in the late 1950s and 1960s to discuss it.6 And third, two scientists working for the American space program, Manfred Clynes and Nathan Kline, coined the term “cyborg” in 1960. It stood for a cybernetic organism, a hybrid system of both artifact and organism, which they thought could give man the freedom “to explore, to create, to think, and to feel” in a highly mechanized environment like a spaceship. Before long, cyborg became a very popular term.7
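The feedback claim can be made concrete with a minimal sketch in modern control notation (my gloss, not Wiener’s own formalism): if $x(t)$ is a torpedo’s heading and $x^{*}(t)$ the bearing of its target, a negative-feedback controller continuously measures the error $e(t) = x^{*}(t) - x(t)$ and steers against it, $\dot{x}(t) = k\,e(t)$ with gain $k > 0$, so that for a stationary target the error decays as $e(t) = e(0)\,e^{-kt}$. On this description, a hand reaching for a cup and a torpedo homing on a ship are instances of the same error-correcting circulation of information.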

One of the reasons why the idea of the cyborg captured the public imagination in the 1960s is that it was proposed at a time of intense concern over automation. During the 1950s, “to ensure future technological progress, increase productivity and ease the strain on workers,”8 the pace of factory mechanization with computerized machinery and cybernetic devices -- that is, automation -- increased dramatically. How this automation would affect society was not yet certain, and there was as much optimism as pessimism among those concerned. Optimists argued that automation would free workers from drudgery and monotonous labor.9 Pessimists and critics, on the other hand, argued that automation would replace workers with machines and render the remaining workers part of the machinery: “automation not only frees human operators from routine work; it also frees the machinery from the restrictions imposed on it by man’s limitations.”10

The criticism was extended to technological society and technical rationality. Erich Fromm deplored the notion that the ideal man for modern capitalist society was an “automaton, the alienated man.” Jacques Ellul’s The Technological Society, first published in French in 1954 and translated into English in 1964, blamed modern technology for emphasizing technological efficiency over other important social and humane values. Throughout, he emphasized the following point: “The combination of man and technics is a happy one only if man has no responsibility; ... technique prevails over the human being ... Human caprice crumbles before this necessity; there can be no human autonomy in the face of technical autonomy.” C. Wright Mills likewise characterized individuals in mass society as “cheerful robots,” and Herbert Marcuse, in his widely read One Dimensional Man, criticized technological rationality as a form of control and domination.11

Lewis Mumford was an influential critic as well. In a short article published in Technology and Culture in 1964, “Authoritarian and Democratic Technics,” Mumford divided technics into two types: authoritarian technics, which is system-centered and seeks uniformity and standardization, and democratic technics, which is human-centered and values variety and ecological complexity. Suggesting that “the inventors of nuclear bombs ... and computers are pyramid builders of our own age,” Mumford pointed out that “through mechanization, automation, cybernetic direction, this authoritarian technics has at last successfully overcome its most serious weakness.” What was its most serious weakness? It was “its original dependence upon resistant and sometimes actively disobedient” humans. To Mumford, technological developments in the 20th century represented an increasing effort to fully incorporate and assimilate disobedient humans into a system of machines. Mumford reasoned that the more technology becomes system-centered, the more it becomes autonomous, or alive; it escapes from human control, even the control of “technical and managerial elites.” Authoritarian technics is a “megatechnics” or “megamachine” which has both technoscientific and bureaucratic apparatuses.11a

The alternative to the megamachine lay in injecting “the rejected parts of human personality” into science and technology. “We cut the whole system back to a point at which it will permit human alternatives, human interventions, and human destinations for entirely different purposes from those of the system itself.” Men must be disobedient: be a Thoreau rather than a Marx. To support his argument on the significance of human elements, Mumford provided two interesting episodes. The first was the huge electric power failure in the northeastern US in 1965. Mumford cited a magazine article reporting that the failure turned the entire city of New York dark and dead, yet suddenly “the people [in New York] were more alive than ever.” The second episode was the experience of the US astronaut John Glenn. His spacecraft was programmed to control itself automatically, but when the automatic control began to malfunction, Glenn insisted, in a message to the US control center, on flying it manually. Glenn’s message was in fact the message that Mumford wanted to send: “Let man take over!”12

Mumford’s warning was clear: as technology becomes autonomous, humans become mechanized. “Instead of functioning actively as an autonomous personality, man will become a passive, purposeless machine-conditioned animal.”13 His pessimism was shared by many. The economist John Kenneth Galbraith wrote in The New Industrial State that “we are becoming the servants ... of the machine we have created to serve us.” Rene Dubos, a famous microbiologist, similarly suggested that “technology cannot theoretically escape from human control, but in practice it is proceeding on an essentially independent course.”14 What is interesting here is that the relationship between man and machine had been reversed. Man was no longer the master of his slave, technology: technology had become the master, and man its slave.15 Isaac Asimov had introduced the famous “three laws of robotics” in a science-fiction story of 1942, but Asimov’s fictional fear had now become real to Mumford.16

The French philosopher Georges Canguilhem had suggested an essentially new way of thinking about machines: as extensions of human organs. Rather than trying to explain the organism in terms of machines, “machines can be considered as organs of the human species. A tool or a machine is an organ, and organs are tools or machines.” Canguilhem’s proposal was anti-Cartesian (“the historical reversal of the Cartesian relationship between the machine and the organism”), because he rejected Descartes’s idea of the animal-as-machine, which had given humans a special status from which to control and exploit the organic world, including animals. By giving up the old Cartesian notion of the organism as a sort of machine, and by embracing a new philosophy of machines as organic, Canguilhem in effect suggested that Western man’s tendency to enslave organic nature, including animals, could be stopped.17

According to Mumford, machines not only became extensions of human organs; humans became components of a “megamachine” that exploited organic nature more than ever. Mumford’s megamachine was different from the mechanization that began in the early 19th century, although some nineteenth-century thinkers also felt that men had become “hands” of the mechanical system. In “Signs of the Times” (1829), Thomas Carlyle stated that “men are grown mechanical in head and heart, as well as in hand.” Andrew Ure’s Philosophy of Manufactures (1835) described the factory as a “vast automaton, composed of various mechanical and intellectual organs, acting in uninterrupted concert for the production of a common object, all of them being subordinated to a self-regulated moving force.” Karl Marx also noted that “an organized system of machines, to which motion is communicated by the transmitting mechanism from a central automaton, is the most developed form of production by machinery. Here we have, in the place of the isolated machine, a mechanical monster whose body fills whole factories, and whose demon power, at first veiled under the slow and measured motions of his giant limbs, at length breaks out into the fast and furious whirl of his countless working organs.”17a

Mumford’s megamachine was much more than an automated machine or a mechanized factory. It was technocracy plus bureaucracy, with its own methods, philosophy and religion, and it was essentially uncontrollable. Mumford had not been as pessimistic when he wrote Technics and Civilization in 1934, where he insisted that we should absorb “the lessons of objectivity, impersonality, neutrality, [and] the lessons of the mechanical realm.” What made him change his mind about technology? The intellectual and socio-cultural milieu of the 1960s was partly responsible, but we can also find an answer in his own writing: it was “mechanization, automation, [and] cybernetic direction” that endowed authoritarian technics with immense power.18


“Cyberscience” and Blurring the Man-Machine Boundary

Several historians of science and technology have recently noted that new branches of science and engineering reinforced each other in the 1950s and 1960s, creating a powerful “discourse of information.” The impact of this discourse was most apparent in molecular biology and genetics. In 1970, the Nobel laureate François Jacob stated that “heredity is described today in terms of information, message, and code” and that “the program [of modern biology] is a model borrowed from electronic computers; it equates the genetic material of an egg with the magnetic tape of a computer.” This altered the relationship between man and machine, said Jacob: now “the machine can be described in terms of anatomy and physiology,” just as “organs, cells and molecules are united by a communication network” through which they exchange signals and messages.19

The transformations that took place in molecular biology during the 1950s and 1960s allowed Jacob to describe its program and methodology in surprisingly novel terms. Lily Kay has discussed the combined influence of Wiener’s cybernetics, Claude Shannon’s information theory, and John von Neumann’s automata upon molecular biology. Shannon’s information was stripped of the semantic value (i.e., meaning) it has in ordinary language, retaining only a technical (i.e., syntactic) value. Information was thus a metaphor, and the information discourse in molecular biology functioned as a “metaphor of metaphor” which transformed the human genome into a sort of text, or a signification, “without a referent.”20 Evelyn Fox Keller has disagreed with Kay about the extent to which these new sciences affected molecular biology. Keller argues that the traffic from information theory and cybernetics to molecular biology was almost useless because of the difference between genetic information and information defined as negative entropy. As she cites André Lwoff, “(biological) functional order cannot be measured in terms of entropy units, and is meaningless from a purely thermodynamical point of view.” Further, she has pointed out that there was traffic in the other direction as well: while molecular biologists were building a new biology by eliminating vital conceptions such as biological function, a number of physicists and engineers were adopting this very traditional idea. But Keller has also acknowledged the importance of “cyberscience” -- information theory, cybernetics, systems analysis, operations research, and computer science -- in providing molecular biologists with new metaphors such as information, message, coding, and feedback. Computers, rather than clocks and steam engines, became the new model for the organism.21
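What “stripped of semantic values” means can be stated precisely with Shannon’s own measure (a standard textbook formulation, not Kay’s or Keller’s notation): for a source emitting symbols with probabilities $p_1, \dots, p_n$, the information per symbol is the entropy $H = -\sum_{i=1}^{n} p_i \log_2 p_i$ bits. $H$ depends only on the statistics of the symbols, never on what they signify; a meaningful sentence and gibberish with the same symbol frequencies carry exactly the same “information.” This is why information in the technical sense sat so awkwardly with biological function, and why Lwoff could insist that functional order is “meaningless from a purely thermodynamical point of view.”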

In the same vein, the historian of technology David Channell claimed that a new “bionic world view” or concept of the “vital machine” emerged in the second half of the 20th century through the combined effect of systems building, cybernetics, computer science, artificial intelligence, new biomedical engineering such as artificial organs and electronic prosthetic devices, and genetic engineering. Many of the important developments Channell described took place in the 1960s. Channell particularly emphasized the impact of the systems theory of Ludwig von Bertalanffy. According to Bertalanffy’s general systems theory, popularized in the 1960s by the Society for General Systems Research (founded in 1954), a system is constituted of various interacting components or subsystems. Its most interesting feature was that some of a system’s components may function in a more organic way than others, while some are more mechanical than others. In other words, a system consists of both organic and mechanical components, and it is therefore neither wholly organic nor wholly mechanical. This certainly blurred the boundary between the organic and mechanical realms.21a The systems idea, based on cybernetics and computer science, was used widely in explaining biological, ecological, social, military, and world systems. But while being used in such an overarching way, the systems idea itself became another “technique to shape man and society ever more into the ‘mega-machine’.”22

What is intriguing about “cyberscience” is the military support for it. Take, for example, its origins: information science, cybernetics, operations research, and computers were all created as solutions to the increasing complexity of military operations during WWII. Information science was developed to maximize the efficiency of communication; Wiener’s cybernetics was devised to control the anti-aircraft predictor, a new hybrid system of man and machine; operations research was practiced to improve the efficiency of military maneuvers; and electronic computers were built to calculate projectile trajectories and to design the atomic bomb. The link between cyberscience and the military continued well into the sixties. The Perceptron, a neural computer designed by Frank Rosenblatt, was funded by the Navy (Office of Naval Research), which was eager to solve the problem of the increasing complexity of military operations by using a computer that could learn. The Navy, as well as the Air Force, sponsored several symposiums on the self-organizing system in the 1960s, attended by biologists, mathematicians, engineers, and logicians. The idea of the self-organizing system was further developed by von Foerster at the University of Illinois, whose laboratory was fully supported by the military. The Air Force supported symposiums on bionics, too. The Air Force had been interested in communication networks in complex systems, and, as is now well known, it was the Air Force that commissioned ARPA (the Advanced Research Projects Agency) to devise the first computer network, the Arpanet, which later became the backbone of the Internet. ARPA’s IPTO (Information Processing Techniques Office) supported computer science and research on artificial intelligence such as MIT’s Project MAC.23

However, although such military-supported research on artificial intelligence, communication theory, and systems theory eventually changed our understanding of the relationship between man and machine, its impact was barely felt outside the scientific community. The military technology that had the strongest impact on people’s thinking about man, machine, and society was the nuclear bomb. Nuclear bombs made the total annihilation of human beings possible, and people had to learn how to live with such horrible weapons. The most crucial problem of the sixties was “survival.”24


Nuclear Weapons Out-of-Control

As US President Kennedy said in the early 1960s, the destructive power of nuclear weapons was inconceivable. Renowned scientists -- fifty-two Nobel prize winners -- declared in the “Mainau Declaration” (1956) that “All nations must come to the decision to renounce force as a final resort of policy. If they are not prepared to do this, they will cease to exist.” The horrible impact of nuclear weapons on people’s lives was also highlighted by the study of Robert Lifton, who lived in Hiroshima for four months in 1962 and investigated the survivors’ lives seventeen years after the bombing. There are many chilling stories and recollections in Lifton’s studies, but the most horrible phenomenon was the survivors’ intimate identification with the dead, their incorporation of the atomic disaster into “their beings, including all of its elements of horror, evil, and particularly of death.” Later, he returned to a question that had been asked by Herman Kahn: “Would the survivors envy the dead?” Lifton’s answer was: “No, they would be incapable of such feelings. They would not so much envy as ... resemble the dead.” Nuclear war destroyed even the possibility of “symbolic survival.”25

However, uncertainty dominated even the horror: there was a deep and essential uncertainty on the issue of nuclear weapons. The strategic analyst Herman Kahn, an expert “defense intellectual” at the Rand Corporation, dismissed the opinions of anti-nuclear scientists as “nonsense” and a “layman’s view.” His “objective” and “quantitative” studies, performed from a “Systems Analysis point of view,” showed that if a nuclear war occurred between the US and the Soviet Union, only forty to eighty million US civilians would die. His point was that after the end of the war, civilization and the economy could be rapidly rebuilt by the survivors: “[My] thesis [is] that if proper preparations have been made, it would be possible for us or the Soviets to cope with all the effects of a thermonuclear war, in the sense of saving most people and restoring something close to the prewar standard of living in a relatively short period of time.” The figure he provided, 40-80 million, was significant because, according to Kahn’s research, most Americans regarded ten to sixty million casualties as acceptable in the case of a total war. Sixty million casualties (one-third of the total population) was the limit; Kahn claimed that as few as forty million US civilians would be killed if the country prepared carefully. For all this, the US must have enough “capability to launch a first strike in a tense situation that would result from an outrageous Soviet provocation.” But this might induce the Soviet Union to attack the US directly rather than provoke it; because of this, the US must also have enough retaliatory capacity to make the enemy’s first attack unattractive. This thinking, he said, was rational and logical.26

Another strategic analyst, Albert Wohlstetter, criticized the scientists’ involvement in strategic decisions. He quoted Bertrand Russell’s famous letter (1955) in which Russell wrote that “I enclose a statement, signed by some of the most eminent scientific authorities on nuclear warfare,” and then criticized it because “among the ten physicists, chemists, and a mathematical logician who were included, not one to my knowledge had done any empirical study of military operations likely in a nuclear war.” The issue of countermeasures in military conflicts, which involved political, military, and strategic (rather than technological) decisions, should be dealt with by a new discipline and new experts, who relied upon “the [quantitative] method of science,” not on “the authority of science.”27 Herman Kahn was, of course, the most famous, or notorious, expert in this new field of strategic analysis.28

The uncertainty about nuclear capabilities was magnified by technology itself. Herman Kahn identified five different ways in which nuclear war could start; the likeliest, he thought, was accident -- false alarms, mechanical failures, or human error. In the same article, however, he discussed the Doomsday Machine, a computerized machine that could destroy the entire earth, and the Doomsday-in-a-Hurry Machine, a Doomsday Machine for a different situation; the Suicide-Pact Machine and the Near-Doomsday Machine were also discussed. Although Kahn concluded that such Doomsday machines lacked strategic utility, it became evident to most that nuclear war could be initiated by machines alone. The computerized nuclear system, with its devastating bombs, had become so complicated that it appeared almost out of control. Since the late 1950s, nuclear weapons had “proliferated” like living organisms. The arms race was partially accelerated by the “potential volatility of military technology.”29 The situation became more complicated because such technical uncertainty and uncontrollability could be -- and in fact was -- used strategically to make a nuclear threat credible to the enemy. This was evident in a US military official’s statement that, in choosing nuclear weapons, “we largely abandon [ourselves] to terms and results dictated by the nature of nuclear weapons.” Politicians like Richard Nixon and Robert McNamara suggested that the US might start a nuclear war irrationally.30 The philosopher Erich Fromm lamented a world full of “impotent men directed by virile machines.” Paraphrasing Emerson’s line that “things are in the saddle and ride mankind,” Fromm insisted that “we still have a chance to put man back into the saddle.” This would not satisfy everyone, however, especially anyone who thought that “evil is not in things but in man. ... To control the Bomb is absurd. ... What we need to control is man.”31

Whose opinion should be trusted: the Nobel laureates’ or the Rand Corporation’s?32 Kahn argued that anti-nuclear scientists were not logical and rational, while SANE (the National Committee for a Sane Nuclear Policy) protested that any discussion of the actual use of nuclear bombs was insane and irrational. C. P. Snow declared that “between a risk [in the restriction of nuclear armament] and a certainty [in the total disaster], a sane man does not hesitate.” Could science and technology save people? A detailed study of nuclear armament by two scientific advisors concluded the contrary: “it is our considered professional judgement that this [nuclear] dilemma has no technical solution.”33 Whoever was right, there was one thing that people could do: build a nuclear shelter. In 1961, a student at Radcliffe College wrote in an essay that “the construction of shelters has become ... a fad, like the suburban swimming pool; for the rich, [it is] a new luxury, for the handy-man, a do-it-yourself toy.” She then added that “the Bomb ... is a sign of schizophrenic inconsistency; ... the shelter represents not a reasoned effort to survive but a senseless gesture.” Schizophrenia was an apt metaphor for the mental state of humans living in the nuclear age.34


Schizophrenic Man, Sane Robots

Schizophrenia was frequently invoked in discussions of the nuclear bomb in the 1960s; it symbolized the inhuman condition of the age. Fromm stated that “in the nineteenth century inhumanity meant cruelty; in the twentieth century it means schizoid self-alienation.”35 Recall that Gregory Bateson had proposed a very interesting theory of schizophrenia in 1956, according to which a person may become schizophrenic if forced to endure, while very young, a “double bind” -- a situation in which he cannot win no matter what he does. A typical double-bind situation was created in a family with a contradictory mother and the “absence of a strong father.”36 One may say that nuclear horror and conflicting authorities pushed the world into a sort of schizophrenic state.37

We can confirm this in the succinct description of the Austrian philosopher Günther Anders. In his book Burning Conscience (1962), Anders wrote that the reality and the image of nuclear mass murder had created the “raging schizophrenia of our day,” in which people act like “isolated and uncoordinated beings.” Anders’s use of the term schizophrenia was more than metaphorical. The book was a collection of the correspondence between Anders and the “hero of Hiroshima,” Major Claude Robert Eatherly, who suffered from “the delayed action of the atomic bomb on its possessors.” In the 1950s, Eatherly twice attempted suicide, was arrested for fraud, and alternated between court appearances and mental hospitals. He had been diagnosed as schizophrenic, although Bertrand Russell later remarked that the insanity lay in the society, not in him. In his first letter to Eatherly, Anders defined the condition of mankind as the “technification of our being,” and went on to say that although Eatherly had been used as a screw in a “military machine,” he wanted to be a human again after the Hiroshima disaster. The revival of his humanity was responsible for his schizophrenic mental condition.38

In one of his letters to Anders, Eatherly spoke of nuclear scientists.


I would like to ask you some questions. Could we trust those nuclear scientists to delay their work and paralyze the political and military organizations? Would they be willing to risk their first love by giving up all the grants, laboratories and government support, and to unite and demand a trusted guardian for their brainchild? If they could do this, then we would be safe.39
What if “those nuclear scientists” were also “schizophrenic”? The metaphor of schizophrenia also characterized the public image of science and scientists in the 1960s. For example, the well-known microbiologist Rene Dubos stressed in his George Sarton lecture of 1960 that “many modern scientists suffer from the schizophrenic attitude,” because of the disparity between scientists’ claims about the usefulness of science and the criticisms of anti-science activists who described scientists as “thoroughly dehumanized” and “mechanized.” Scientists, on this view, were like products made by a “gadget” called the scientific community.40

Dubos’s comment is interesting because it links the schizophrenic attitude to the “dehumanized” and the “mechanized.” This link was more than metaphorical. The March 1959 issue of Scientific American reported the surprising story of Joey, a “Mechanical Boy,” who thought of himself as a machine or robot while suffering from severe schizophrenia.41 He behaved as if he were a machine, controlled by remote controls of his own fantasy, and he believed that machines were better than people because they were stronger. The doctors who treated him eventually discovered that his parents had transformed him into a sort of machine by treating him mechanically, without love or tenderness. The doctors therefore tried to revive a sense of human trust and feeling inside him. As Joey made progress, he gradually regained control of the mechanical environments around him, and then became able to relate emotionally to people. “Robots cannot live and remain sane. They become ‘golems’ [and] they will destroy their world and themselves.” Before this happened to Joey, humanity went “back into the saddle” and saved him.42

But machines entered the scene again. Six years later, in 1965, the New York Times published an article reporting the use of a machine, the Computerized Typewriter (Edison Responsive Environmental Learning System), to treat autism successfully where standard psychotherapy had failed and no cure or cause was known. The Computerized Typewriter was a human-like machine: it talked, listened, responded, and drew pictures, but it never punished. The doctor who treated autistic children with the machine noted that many of these children had an abnormal preoccupation with mechanical objects. Several boys who had refused to speak to humans began talking with the machine, and after a year of therapy they began to respond to human conversation; some were able to return to school.43 A man-made machine -- the nuclear bomb -- had pushed people into a (metaphorically) schizophrenic mentality, but another machine -- the Computerized Typewriter, a communication device -- treated it. Was it because, as Bateson believed, the human brain is essentially a communicating and thinking machine?44

Conclusion: From Intelligence to Emotions

In the 1960s, people perceived, and expressed, new relationships between man and machine. Automation, systems theory, cybernetics, genetics, information theory, artificial intelligence, computers, and atomic weapons all contributed to these new visions, which ranged from optimism to apocalyptic pessimism. Some were close to reality, while others were imaginary and fantastic. The underlying philosophical question, however, remained the same: how can we retain our essential humanity in such a mechanized age? What makes us more than machines? As I cited in the epigraph at the beginning of this paper, the first Pugwash Conference invited participants to “Remember your humanity and forget the rest; if you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.” But what is our essential humanity?45

Since the time of Aristotle, Western people have believed that “the soul” or “the self” distinguishes humans from non-humans.46 The manifestation of the soul’s capacity is most clearly expressed in Descartes’s motto “cogito ergo sum” -- a capacity for reasoning or intelligent thinking. Animals felt, but they could not think; animals were machines. Therefore non-animal, man-made machines -- mechanical clocks, the steam engine, Vaucanson’s defecating duck, and the telegraph -- could not think either. But would this distinction remain valid in the age of automation, cybernetics, intelligent computers, self-reproducing automata, and the Doomsday Machine?

In a popular exposition of the Turing machine and automata, John Kemeny addressed this question. His conclusion was that “there is no conclusive evidence for an essential gap between man and a machine [like an electronic computer]; for every human activity we can conceive of a mechanical counterpart.”47 Using an evolutionary metaphor, Bruce Mazlish emphasized that the distinction between man and machine had almost disappeared, epitomizing this in his discourse on the “fourth discontinuity.” Throughout human history, Mazlish argued, three great thinkers had “outraged man’s naive self-love”: Copernicus, who abolished the discontinuity between the earth and the universe; Darwin, who eliminated the discontinuity between man and animals; and Freud, who erased the discontinuity between the conscious and the unconscious. But “a fourth and major discontinuity, or dichotomy, still exists in our time; it is the discontinuity between man and machine.” This discontinuity would be eliminated, Mazlish continued, once we realized that “man and machines are continuous.”48 Herbert Simon also noted that “as we begin to produce mechanisms that think and learn, [man] has ceased to be the species uniquely capable of complex, intelligent manipulation of his environment.”49 So did Wiener, who claimed that “machines can and do transcend some of the limitations of their designers, and that in doing so they may be both effective and dangerous.”50 John von Neumann, the designer of the stored-program computer and of the theory of self-reproducing automata, pointed out that to survive technology we must possess three (not one) essential human qualities: patience, flexibility, and intelligence. Intelligence alone was not enough, because some machines could think.51



But is computer intelligence the same as human intelligence? A theorist of artificial intelligence denied that the two were identical:
If machines really thought as men do, there would be no more reason to fear them than to fear men. But computer intelligence is indeed “inhuman”: it does not grow, has no emotional basis, and is shallowly motivated. These defects do not matter in technical applications, where the criteria of successful problem solving are relatively simple. They become extremely important if the computer is used to make social decisions, for there our criteria of adequacy are as subtle and as multiply motivated as human thinking itself.52
In other words, human intelligence had an emotional basis and was deeply motivated. Emotions and feelings became what characterized human beings. “Our experience of love and beauty” is not just emotion but “moments of metaphysical insights.” Alfred Whitehead’s “philosophy of feelings” was revived.53 After several years of treatment, Joey, the Mechanical Boy, left the hospital for a Memorial Day parade. He held a sign which said: “Feelings are more important than anything under the sun.”54 In the age of smart machines and nuclear holocaust, feelings became what would make us human.55

1 Prepared for my speech at POSTECH (Pohang, South Korea) on the 22nd of June, 2000. Comments are welcome, but please do not circulate or cite this paper.

2 One of the most useful (and insightful) historical studies of the 1960s is Arthur Marwick, The Sixties: Cultural Revolution in Britain, France, Italy and the United States, c. 1958-c. 1974 (Oxford: Oxford University Press, 1998).

3 Pessimistic attitudes to science and technology in the 1960s are discussed in Everett Mendelsohn, “The Politics of Pessimism: Science and Technology circa 1968,” in Y. Ezrahi et al. eds., Technology, Pessimism, and Postmodernism (Kluwer, 1994), 151-173.

4 This paper results from my continuing interest in the meaning of ‘autonomous technology,’ which is fully discussed in Sungook Hong, “Unfaithful Offspring? Technologies and Their Trajectories,” Perspectives on Science 6 (1998), 259-287.

5 For Wiener’s “first-wave” cybernetics, see Norbert Wiener, Cybernetics: or, Control and Communication in the Animal and the Machine (Cambridge, MA, 1948), and Peter Galison, “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision,” Critical Inquiry 21 (1994), 228-266.

6 Norbert Wiener, God and Golem, Inc.: A Comment on Certain Points where Cybernetics Impinges on Religion (London: Chapman & Hall, 1964), which contains provocative implications of cybernetics for religion. One of them is that in making a learning (cybernetic) machine, humans became equivalent to God. Geof Bowker has pointed out that such religious claims -- to create new life or to destroy the earth -- were among the strategies of cyberneticians to make cybernetics universal. Geof Bowker, “How to be Universal: Some Cybernetic Strategies, 1943-70,” Social Studies of Science 23 (1993), 107-127. For second-wave cybernetics, see N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago, 1999), pp. 6-11, 72-76.

7 For Clynes and Kline’s cyborg of 1960 and cyborgs since then, refer to Chris H. Gray ed., The Cyborg Handbook (New York and London: Routledge, 1995). See also D. S. Halacy, Jr., Cyborg: Evolution of the Superman (NY: Harper, 1965); Alvin Toffler’s Future Shock, published in 1970, had a section entitled “The Cyborgs Among Us.” Interest in the cyborg in women’s studies, STS, and cultural studies was revived in the mid-1980s by Donna Haraway’s provocative article, “A Manifesto for Cyborgs: Science, Technology and Socialist Feminism in the 1980s,” Socialist Review 80 (1985), 65-107, reprinted in her Simians, Cyborgs and Women (New York: Routledge, 1991), pp. 149-181.

8 John Diebold, “The Economic Consequences of Automation,” Cybernetica 2 (1959), 5-21, on p. 9.

9 Lee Du Bridge said: “Scientific and technological knowledge and its applications have brought new security, new comforts, new dignity within the reach of human beings. They have brought in sight the day of the elimination of most kinds of unskilled hand labour -- and have thus elevated the status and dignity of the working man... They have also increased the need for and social importance of the highly talented and the well educated -- the teachers, the scientists, engineers, doctors, lawyers, industrial managers.” Lee A. Du Bridge, “Education and Social Consequences,” in John T. Dunlop ed., Automation and Technological Change (Prentice Hall, 1962), p. 42. Maintaining that man, not the machine (the tractor), destroyed the horse for higher wages, Herbert Simon also expressed optimism that “so long as the supply of computers responds to market forces ... they will be in no position similarly to abandon man.” Herbert Simon, The Shape of Automation for Men and Management (New York: Harper, 1965), p. 25.

10 J. Garcia Santesmases, “A Few Aspects of the Impact of Automation on Society,” Impact of Science on Society 11 (1961), 107-126, on p. 111.

11 Erich Fromm, “The Present Human Condition,” American Scholar 25 (1955/6), 29-35, on p. 31; Jacques Ellul, The Technological Society (New York: Vintage Books, 1964), pp. 136-138; C. W. Mills, The Sociological Imagination (New York, 1959), chapter 9, sec. 3; Herbert Marcuse, One Dimensional Man (Boston: Beacon Press, 1964). For Marcuse in the 1960s, see Mendelsohn, “Politics of Pessimism,” pp. 161-162.

11a Lewis Mumford, “Authoritarian and Democratic Technics,” Technology and Culture 5 (1964), 1-8, on p. 1, 5. Mumford, The Myth of the Machine: I. Technics and Human Development (1967). For Mumford’s megamachine, see also Mendelsohn, “Politics of Pessimism,” pp. 167-170; Donald L. Miller, “The Myth of the Machine: I. Technics and Human Development,” in Thomas P. Hughes and Agatha C. Hughes eds., Lewis Mumford: Public Intellectual (Oxford: Oxford University Press, 1990), pp. 152-163.

12 Mumford, “Authoritarian and Democratic Technics,” pp. 7-8; Mumford, Pentagon of Power (New York, 1970), p. 412; Mendelsohn, “Politics of Pessimism.” A later investigation revealed that the 1965 electric failure was largely due to the multiplied complexity of an electric power system that transcended the control of humans. See James D. Carroll, “Noetic Authority,” Public Administration Review 29 (1969), 492-500.

13 Mumford, The Myth of the Machine (1967), quoted in M. R. Smith, “Technological Determinism in American Culture,” in M. R. Smith and L. Marx eds., Does Technology Drive History? The Dilemma of Technological Determinism (MIT, 1994), pp. 1-35, on p. 29.

14 Galbraith and Dubos are cited from Langdon Winner, Autonomous Technology (MIT, 1977), on p. 14. In it, Winner also took up and developed Mumford’s theme further.

15 This theme is vividly expressed in the 1970 movie Colossus: The Forbin Project. In it, the US scientist Dr. Forbin constructs an intelligent computer, Colossus, to control a Doomsday Machine; once constructed, however, it rapidly manifests an independent intelligence and gets out of control. When it learns of the existence of a similar computer in the Soviet Union, Colossus creates a direct communication link with it. Concluding that humans are a threat both to the computers’ existence and to their own, the two machines take over the world and enslave the humans. See Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (MIT, 1997), pp. 325-27.

16 The term robot derives from the Czech robota, meaning forced or servile labor. The Latin root servus (as in servo-mechanism) also means a slave. The term robot was first used in Karel Čapek’s play R.U.R. (1920). The term robotics was first used by Isaac Asimov, whose 1942 short story “Runaround” introduced his famous “Three Laws of Robotics”: 1. A robot may not injure human beings; 2. A robot must obey the orders given by human beings except where such orders would conflict with the First Law; 3. A robot must protect its own existence as long as such protection does not conflict with either the First or Second Law.

17 Georges Canguilhem, “Machine and Organism,” in Jonathan Crary and Sanford Kwinter eds., Incorporations (NY: Zone, 1992), pp. 45-69, on p. 56, 55. Canguilhem’s article was originally a part of his three lectures given at the Collège de France in 1946-47. See also Ian Hacking, “Canguilhem amid the cyborgs,” Economy and Society 27 (1998), 202-216. Canguilhem’s lecture was not influential in the United States. However, Lynn White Jr. claimed that Western Christianity justified man’s domination over animals and the natural world. See his influential article, “The Historical Roots of Our Ecological Crisis,” Science 155 (1967), 1203-1207.

17a Quoted from Bruce Mazlish, The Fourth Discontinuity: The Co-Evolution of Humans and Machines (New Haven: Yale University Press, 1993), ch. 4, and David F. Channell, The Vital Machine (Oxford University Press, 1991), p. 85. The mechanization that began with the Industrial Revolution can largely be considered the first stage of automation: the stage of dependent machines. The second stage -- that of semi-automatic machines -- began in the early twentieth century. Since the 1950s, the third stage, full automation with automatic machines, has been under way. For this periodization, see George Simpson, “Western Man under Automation,” International Journal of Comparative Sociology 5 (1964), 199-207.

18 Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace and Company, 1934), p. 363; Mumford, “Authoritarian and Democratic Technics,” p. 5.

19 François Jacob, The Logic of Living Systems (Allen Lane, 1974), on p. 1, 9, 253, 254. The book was first published in French in 1970.

20 Lily E. Kay, “Cybernetics, Information, Life: The Emergence of Scriptural Representations of Heredity,” Configurations 5 (1997), 23-91, on p. 28.

21 Evelyn Fox Keller, “The Body of a New Machine: Situating the Organism between Telegraphs and Computers,” Perspectives on Science 2 (1994), 302-323, on p. 311. Aware of Keller’s criticism, Lily Kay has also acknowledged the difference in the use of “information” between information theorists and biologists. She has, however, tried to make a connection between them through von Neumann’s automata. Von Neumann had been captivated by McCulloch and Pitts’s 1943 paper on neurons. He was also interested in Sol Spiegelman’s biological research on the “plasmagene theory of gene action” (which later became an important clue to RNA) and in Max Delbrück’s work on viruses. Mobilizing such multiple resources, von Neumann designed an automaton, a self-replicating Turing machine, in which the Turing machine’s “information tape” was functionally equivalent to genes. Kay argued that von Neumann’s discussions of automata with biologists such as Spiegelman and Joshua Lederberg stimulated them to embark upon a new field of “biosemiotics.” However, the status and role of this biosemiotics in the new molecular biology of the 1960s and 70s are not clear. Kay, “Cybernetics, Information, Life,” pp. 62-76.

21a Channell, Vital Machine, chs. 6 and 7.

22 Ludwig von Bertalanffy, General System Theory (New York, 1968), pp. vii - viii, quoted from Winner, Autonomous Technology, on p. 289.

23 See Edwards, Closed World; Evelyn Fox Keller, “Marrying the Pre-modern to the Post-modern: Computers and Organism after WWII,” (1997, manuscript). For the ARPA and IPTO, see Janet Abbate, Inventing the Internet (MIT, 1999).

24 Theodore M. Hesburgh’s comment on Charles P. Snow’s lecture “The Moral Un-Neutrality of Science,” Science 133 (1961), on p. 260. In 1968, Arthur Koestler summed up the crisis of the time in a single sentence: “From the dawn of consciousness until the middle of our century man had to live with the prospect of his death as an individual; since Hiroshima, mankind as a whole has to live with the prospect of its extinction as a biological species.” Arthur Koestler, “The Urge to Self-Destruction,” in The Heels of Achilles: Essays 1968-1973 (London: Hutchinson, 1974), pp. 11-25, on p. 11.

25 The Mainau Declaration is quoted from James R. Newman, “Two Discussions of Thermonuclear War,” Scientific American 204 (March, 1961), 197-204, on p. 198. For Lifton’s study, see Robert Jay Lifton, “Psychological Effects of the Atomic Bomb in Hiroshima: The Theme of Death,” Daedalus 92 (1963), 462-497, on p. 482; idem, Death in Life (New York: Random House, 1967), p. 31, 541. See also James Rosenblatt, Witness: The World Since Hiroshima (Toronto, 1985), p. 87.

26 Herman Kahn, On Thermonuclear War (Princeton: Princeton University Press, 1960), p. viii, 71. See James Newman’s insightful review, “Two Discussions.” Kahn’s weakness, as another reviewer pointed out, lay in the neglect of the “human condition in the post-attack period” such as “the behavior of groups, individuals, and leaders under extreme threat, in the face of sudden disaster, or in ambiguous situations.” Donald Michael’s review of Herman Kahn’s On Thermonuclear War in Science 133 (1961), 635.

27 Albert Wohlstetter, “Scientists, Seers, and Strategy,” Foreign Affairs 41 (1962/3), 466-478, on p. 468. In this paper, Wohlstetter criticized C.P. Snow, Edward Teller and Hans Bethe, who had “hostility to the fact of hostility itself” and tended to “think of harmony rather than conflict” (p. 474).

28 Newman’s statement is riveting (“Two discussions,” on p. 197).

Is there really a Herman Kahn? It is hard to believe. Doubts cross one’s mind almost from the first page of this deplorable book: no one could write like this; no one could think like this. Perhaps the whole thing is a staff hoax in bad taste. The evidence as to Kahn’s existence is meager. ... Kahn may be the Rand Corporation’s General Bourbaki, the imaginary individual used by a school of French mathematicians to test outrageous ideas. The style of the book certainly suggests teamwork. It is by turns waggish, pompous, chummy, coy, brutal, arch, rude, man-to-man, Air Force crisp, energetic, tongue-tied, pretentious, ingenuous, spastic, ironical, savage, malapropos, square-bashing and moralistic. ... How could a single person produce such a caricature?



29 Herman Kahn, On Thermonuclear War; idem, “The Arms Race and Some of Its Hazards,” Daedalus 89 (1960), 744-780; idem, Thinking about the Unthinkable (NY: Avon Books, 1962). Ciro E. Zoppo, “Nuclear Technology, Multipolarity, and International Stability,” World Politics 18 (1965/6), 579-606, on p. 599.

30 Wendell Berry, “Property, Patriotism, and National Defence,” in D. Hall ed., The Contemporary Essay (St. Martin’s Press, 1989), on p. 56; quoted from Rosalind Williams, “The Political and Feminist Dimensions of Technological Determinism,” in M. R. Smith and L. Marx eds., Does Technology Drive History? The Dilemma of Technological Determinism (MIT, 1994), pp. 217-235. Spencer R. Weart, Nuclear Fear: A History of Images (Harvard University Press, 1988), on p. 311.

31 Erich Fromm, “The Case for Unilateral Disarmament,” Daedalus 89 (1960), 1015-1028, on p. 1020. See also his “The Present Human Condition,” p. 34. Denis de Rougemont, “Man v. Technics?” Encounter 10 (1958), 43-52, on p. 48. See also Joseph Pitt, “The Autonomy of Technology,” in Gayle L. Ormiston ed., From Artifact to Habitat: Studies in the Critical Engagement of Technology (Bethlehem: Lehigh University Press, 1990), pp. 117-131, where he stated (on p. 129) that “those who fear reified technology really fear other individuals; it is not the machine that is frightening, [but] it is what some individuals will do with the machine.”

32 A concerned pacifist stated that “in spite of all this Mr. Kahn is no warmonger, on the contrary; individual passages sound, though sacrificing consistency, rather unprejudiced and almost pacifistic.” Otto Schneid, The Man-Made Hell: A Search for Rescue (Toronto: Source Books, 1970), on p. 256.

33 Weart, Nuclear Fear, on p. 250; C. P. Snow, “The Moral Un-Neutrality of Science,” Science 133 (1961), on p. 259; Jerome B. Wiesner and Herbert F. York, “National Security and the Nuclear-Test Ban,” Scientific American 211 (October 1964), 27-35, on p. 35.

34 Renata Adler, “We Shall All Char Together...,” New Politics 1 (1961-2), 53-56, on p. 55, 54.

35 Fromm, “The Present Human Condition,” p. 33.

36 Gregory Bateson, Don D. Jackson, Jay Haley, and John Weakland, “Toward a Theory of Schizophrenia,” Behavioral Science 1 (1956), 251-264. Heavily influenced by Wiener’s cybernetics, Bateson reconstructed in this paper the theory of schizophrenia in terms of the breakdown in the system of “meta-communication.”

37 For a contemporary analysis, see Hanna Segal, “Silence Is the Real Crime,” in Howard B. Levine, Daniel Jacobs and Lowell J. Rubin eds., Psychoanalysis and the Nuclear Threat: Clinical and Theoretical Studies (Hillsdale, NJ: The Analytic Press, 1988), pp. 35-58, esp. pp. 42-44 (on her discussion of “the world of schizophrenics” created by the prospect of atomic war).

38 Claude Eatherly and Günther Anders, Burning Conscience (New York: Monthly Review Press, 1962), on p. 12, 1, 5; Russell’s preface on p. ix. Eatherly’s condition was diagnosed as follows: “An obvious case of changed personality. Patient completely devoid of any sense of reality. Fear complex, increasing mental tensions, emotional reactions blunted, hallucinations” (p. xviii).

39 Eatherly and Anders, Burning Conscience, on p. 22.

40 Rene Dubos, “Scientist and Public,” Science 133 (1961), 1207-1211, on p. 1209, 1210. Robert Boguslaw, “The Organization Scientist,” Trans-actions 3 (1965/6), 47-48.

41 Bruno Bettelheim, “Joey: A ‘Mechanical Boy’,” Scientific American (March, 1959), 117-126.

42 Fromm, “Present Human Condition” (p. 34).

43 Ronald Sullivan, “Computerized Typewriter Leads Schizoid Children Toward Normal Life by Helping Them to Read,” New York Times (12 March 1965).

44 Bateson later recalled that the idea of “double-bind” was inspired by Wiener’s suggestion of the “schizophrenic telephone exchange” that makes errors. For this, see Steve Heims, The Cybernetics Group, 1946-1953: Constructing A Social Science for Postwar America (MIT, 1991), p. 156.

45 Quoted from Wohlstetter, “Scientists, Seers, and Strategy,” p. 471.

46 The search for the soul in the mechanized age is the central theme of much science fiction. See Per Schelde, Androids, Humanoids, and Other Science Fiction Monsters: Science and Soul in Science Fiction Films (New York and London: New York University Press, 1993), p. 126.

47 John G. Kemeny, “Man Viewed as a Machine,” Scientific American 192 (1955), 58-67, on p. 67.

48 Bruce Mazlish, “The Fourth Discontinuity,” Technology and Culture 8 (1967), 1-15, on p. 3.

49 Herbert Simon, quoted from John Diebold, “Automation: Perceiving the Magnitude of the Problem,” Cybernetica 8 (1965), 150-156, p. 152.

50 Norbert Wiener, “Some Moral and Technical Consequences of Automation,” Science 131 (1960), 1355-1358.

51 John von Neumann, “Can We Survive Technology?” Fortune (June 1955), 106-152.

52 Ulric Neisser, “The Imitation of Man by Machine,” Science 139 (1963), 193-97, on p. 197.

53 Floyd W. Matson, The Broken Image: Man, Science and Society (New York: George Braziller, 1964), pp. 256-257.

54 Bettelheim, “Joey,” p. 126.

55 Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (1968), which was later made into the movie Blade Runner, also “shows the essential quality of ‘the human’ shifting from rationality to feeling.” Hayles, How We Became Posthuman, on p. 175. In this movie, as Hayles points out, one of the most interesting scenes is the use of the Voigt-Kampff test to tell humans from androids: the test detects human emotions linked to memory and thought. I must add here that “feeling” was a short-lived characteristic of humanity. In the 1960s, various drugs were employed to control and create certain feelings, meaning that “feeling” could be chemically controlled. In his second paper on cyborgs (“Cyborg II: Sentic Space Travel,” 1970), Manfred Clynes introduced the idea of a “sentic state” -- a dynamic form in the nervous system responsible for the generation of emotions. For astronauts traveling in space, Clynes suggested a way of stimulating sentic cycles rather than using drugs to control their emotions. See Chris H. Gray, The Cyborg Handbook (Routledge, 1995), on p. 38.




