Human Control and Autonomy in Cybernetic Systems
This paper questions the idea that humans in cybernetic systems are autonomous in the traditional Western liberal sense. Examination of the history of cybernetics reveals conflicts over the role of humans within human/machine systems spanning a longer period than is traditionally associated with the field. Comparing cybernetic systems from before the 20th century through the Cold War makes apparent how cybernetics served as an experimental testing ground for political ideologies to express themselves in the budding information age. This comparison suggests that cybernetics as a discipline neither inherently supports nor contradicts the ideals of freedom and agency; rather, cybernetic systems become extensions of the organizations they serve and adopt their parent organizations' beliefs.
The following paper examines the subject of human freedom and autonomy within the structure of cybernetic systems. Specifically, it reviews research on the development and implementation of various communicative and cybernetic systems at multiple orders of magnitude, from the atomic level to the nation state, and over a period much longer than is commonly considered part of the history of cybernetics. While the term 'cybernetics' may have been coined and popularized by Norbert Wiener in the mid-20th century, research and development of complex human/machine computing systems predates Wiener's work by at least 50 years.
This paper seeks to answer the following question: How does human participation in cybernetic systems conform to, or conflict with, pre-existing political notions of human autonomy and agency from the tradition of Western liberalism? Norbert Wiener himself was personally committed to the traditional concept of the liberal self, a notion he makes quite explicit in his book The Human Use of Human Beings.
As a participant in a liberal outlook which has its main roots in the Western tradition, but which has extended itself to those Eastern countries which have a strong intellectual-moral tradition, and has indeed borrowed deeply from them, I can only state what I myself and those about me consider necessary for the existence of justice. The best words to express these requirements are those of the French Revolution: Liberté, Egalité, Fraternité (Wiener 1954, 105).
Despite this stated commitment to Western liberalism, Wiener also made very clear that the problem of communication was fundamentally a problem of control, an observation he was never able to reconcile with his convictions. Similar attachments of political ideology to cybernetic projects occurred in other countries throughout the 20th century. Examination of cybernetic projects in Chile and the USSR reveals that just as Wiener was committed to liberalism, other cyberneticians were committed to various interpretations of Marxism, reflecting the major ideological battle of the Cold War.
The following literature review highlights the different ways in which cybernetic principles of control have affected human autonomy and decision-making within human/machine systems. The literature makes evident that cyberneticians in the 20th century were fully capable of adapting the principles of cybernetics to their pre-existing political ideas, rather than discovering some Platonic ideal form of governance and order that they believed cybernetics suggested. Furthermore, these readings support the idea that cybernetic systems at the social level are inherently political artifacts.
This review is structured around three different levels of abstraction. First, I will review readings examining the relationship between humans and machines, then move up to historical examples of early human/machine systems. Finally, I will examine historical examples of large-scale social cybernetic systems.
Cyborgs, Symbiosis, and Posthumanism
Cyborgs have of course been a hugely important and popular concept in science fiction, but the term arguably applies to a broad spectrum of real phenomena as well. In her 1985 Manifesto for Cyborgs, Donna Haraway defined the term: "A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction" (Haraway, 191). Setting fiction aside, the human-machine and human-computer hybrid has been examined, studied, and developed since long before films like Blade Runner, Terminator, or Robocop.
Setting aside for now the work done by military contractors on automated fire control systems, J. C. R. Licklider notes in his 1960 paper Man-Computer Symbiosis that humans have long extended themselves mechanically in ways that benefited them. Licklider observes that, while limited, computers are superior to the human mind in a number of ways: "Computing machines can do readily, well, and rapidly many things that are difficult or impossible for man, and men can do readily and well, though not rapidly, many things that are difficult or impossible for computers" (Licklider, 6). He thus concludes that these complementary strengths and weaknesses suggest a symbiotic bond. Symbiosis, however, implies mutual benefit, and it is unclear how a machine benefits from an arrangement in which it has no agency. Nevertheless, Licklider predicted numerous ways in which humans could benefit from working with computers. Still, the term 'cyborg' remains somewhat hyperbolic in reference to a human operating a computer: it implies embodied change in the information processing and structure of the human form, rather than simple mechanical extension.
The possibility of an embodied combination of biological and artificial information processing was suggested by the 1959 paper What the Frog's Eye Tells the Frog's Brain. Its authors, Lettvin, Maturana, McCulloch, and Pitts, examined the optic nerves of frogs and determined that the physical structure of the optic nerve determines the kind of information collected and processed by the brain. Specifically, they found that the frog's eye possesses fibers that perform four specific operations. They conclude that the animal brain does not simply observe an objective reality, but actively constructs the observed world around it. Conceivably, a device that could properly interface with the existing information processing of the nervous system could be integrated into the body of an organism, altering its perception of the world. Such a device would be closer to the familiar cyborg of fiction and of Haraway's Manifesto.
At the heart of the Manifesto, however, is not an argument for the existence of cyborgs, but a deconstruction of the sociopolitical norms unsettled by the ontological shift that cyborgs represent.
The cyborg is a creature in a postgender world; it has no truck with bisexuality, pre-Oedipal symbiosis, unalienated labor, or other seductions to organic wholeness through a final appropriation of all the powers of the parts into a higher unity. In a sense, the cyborg has no origin story in the Western sense; a 'final' irony since the cyborg is also the awful apocalyptic telos of the West's escalating dominations of abstract individuation, an ultimate self untied at last from all dependency, a man in space. (Haraway, 192)
Haraway uses this conception to deconstruct various branches of feminist thought. At the heart of this critique is the observation that much feminist criticism of society proceeds through the ways society moralizes about and regulates a woman's body. If the body is no longer a permanent fixture, the normative attachments to it no longer have bearing.
Indeed, this sense of alienation and disconnection from the body is felt even without cybernetic enhancements, an experience explored in Shelley Jackson's 1997 hypertext piece, "My body a Wunderkammer." In the vignettes attached to each of her body parts, Jackson tells stories from her life. By focusing on her growth and development, she creates the sense of an observer who sees her body not as part of her identity, but as an interface or prosthesis she was assigned. It is an explicitly sexual work, deconstructing her gendered life experience from her body. The promise of a modular, cyborg body suggests that if these parts could change, the human experience would fundamentally change as well. Our practices are not mere inscriptions, but embodied practices that define our day-to-day lives.
As cybernetic prostheses continue to advance, we will soon have first-person accounts from the kinds of cyborgs first envisioned by science fiction, and as a society we will likely have to adjust social and cultural expectations regarding those individuals. In looking forward to what we can expect, this review examines literature on human/machine systems at the individual, small-group, and social levels in the 20th century.
Pre-’Cybernetic’ Human Machine Systems
While Norbert Wiener may have popularized the term 'cybernetics' with the 1950 publication of The Human Use of Human Beings, work in what we would now call cybernetics began at the start of the 20th century. In Between Human and Machine, David Mindell draws causal links between pre-World War II developments in naval fire control and anti-aircraft computers and the major advances in cybernetics and communication of the post-war years. More important, however, was the continuity of ideas and viewpoints cultivated at the organizations pioneering the work. Decisions made by engineers at Sperry Gyroscope, Ford Instruments, and the Navy's Bureau of Ordnance (BuOrd) would go on to help define the epistemological base as well as the direction of cybernetic work well past their lifetimes.
The problem of naval fire control was the first great cybernetic problem to be tackled. As naval technology improved and the range of guns increased, accurately shooting enemy ships became dramatically more difficult. Fire control is a system with a single goal: hit an enemy target. The results of its actions are black and white, either a hit or a miss, and that result is a function of the following procedure:
- Perception – aiming the gun, looking down the sight
- Integration – leading the target based on its trajectory and velocity
- Articulation – pulling the trigger
Early automated fire control systems were analog computers that calculated targeting coordinates for officers, who relayed that information to the men physically manning the guns. Humans were deliberately inserted into this system at points where automation would eventually replace them; early on, engineers at BuOrd specifically requested human operators for their guns. At the time, human/machine hybrid systems simply operated at a higher success rate than totally automated systems reliant on servomechanisms for feedback, because humans helped prevent the excessive oscillation, or 'hunting,' caused by overcompensating servomechanisms.
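The 'hunting' problem can be illustrated with a toy proportional-correction loop (the gains and target below are invented for illustration, not Sperry's actual control law): a servo that overcorrects swings past its target on every cycle and oscillates around it, while a damped correction converges.

```python
def track(target, gain, steps=20):
    """Iteratively correct aim toward a target bearing; return the error at each step."""
    position = 0.0
    errors = []
    for _ in range(steps):
        error = target - position
        errors.append(error)
        position += gain * error  # correction proportional to the observed error
    return errors

# gain = 2.0: each correction overshoots by exactly the remaining error,
# so the servo "hunts" back and forth around the target indefinitely.
hunting = track(target=10.0, gain=2.0)

# gain = 0.5: a damped correction halves the error each cycle and settles.
damped = track(target=10.0, gain=0.5)

print("hunting errors:", [round(e, 1) for e in hunting[:5]])
print("damped errors: ", [round(e, 1) for e in damped[:5]])
```

The human operators Mindell describes effectively played the role of the damping term, moderating the machine's raw corrections.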
Noise also necessitated human intervention. Electronic data collected by the computer was not always reliable, and "even an unskilled human eye could eliminate complications due to erroneous or corrupted data" (Mindell, 92). The Sperry Gyroscope T-6 anti-aircraft fire control system illustrates the point. The US Army requested a system with minimal human intervention, yet the result required nine operators translating feedback from the device: "…one man corresponded to one variable [required to compute a firing solution], and the machine's requirements for operators corresponded directly to the data flow of its computation" (Mindell, 92). These men were referred to as 'manual servomechanisms,' a term that ironically named the automated technology that would eventually replace them.
It is fitting that a zero-sum military operation such as fire control would explicitly reduce humans to automated tools within its system. Just as military culture was built on a singular teleological goal, the use of humans in its earliest cybernetic systems was equally one-dimensional. While it is not particularly novel or insightful to claim that the military requires rigid control and discipline from its members, the reduction of humans to single-purpose actors functioning as machine parts was a novel idea. Though human fire control operators were a suboptimal solution, they were a necessary one, and they demonstrated the way in which humans and machines mirror each other in function, an idea later explored more thoroughly by Wiener and Alan Turing.
Humans and Machines, Entropy and Control
In The Human Use of Human Beings, Wiener drew attention to an old thought experiment published by James Clerk Maxwell in 1867. Maxwell's Demon, as it came to be called, was a fictional entity standing as gatekeeper between two gas chambers, allowing only high-speed molecules through to one side and thereby adding negative entropy to a closed system for a time before itself succumbing to entropy. It could do this only through its structure and its ability to gather information and sort it from noise. Wiener recognized that although humans and machines are not closed systems like Maxwell's gas chambers, both are open systems that use organization and information to resist the trend toward entropy.
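The demon's trick can be sketched as a toy simulation (molecule counts, speeds, and the threshold are arbitrary; this illustrates the sorting logic, not the physics): by letting only fast molecules pass one way and slow ones the other, the demon manufactures a hot and a cold chamber out of uniform gas.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Two chambers of gas, each molecule represented only by its speed.
left = [random.uniform(0.0, 1.0) for _ in range(200)]
right = [random.uniform(0.0, 1.0) for _ in range(200)]
print("mean speeds before:", round(mean(left), 2), round(mean(right), 2))

# The demon guards the gate: fast molecules may pass left-to-right,
# slow molecules right-to-left -- information sorted from noise.
THRESHOLD = 0.5
for _ in range(2000):
    random.shuffle(left)   # molecules move about; one arrives at the gate
    random.shuffle(right)
    if left and left[-1] > THRESHOLD:
        right.append(left.pop())
    if right and right[-1] <= THRESHOLD:
        left.append(right.pop())

# The right chamber is now hot (fast molecules) and the left cold (slow ones):
# a local decrease in entropy, bought with the demon's sorting work.
print("mean speeds after: ", round(mean(left), 2), round(mean(right), 2))
```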
James Beniger explains this phenomenon in The Control Revolution. More importantly, he explains how organization and order differentiate living from nonliving matter.
“When we compare even the simplest living systems to the most complex inorganic materials, one difference stands out: the much greater organization found in things organic. Living things require many pages to describe the organization of their physical structures, while the structure of an inorganic compound can always be captured in a relatively short string of symbols…Crystals can be uniquely described by a combination of their chemistry and atomic arrangement: only thirty-two different types of symmetry and seven systems of relationships among axes are possible; angles between corresponding faces must be constant. In other words, the complexity of crystals derives not from their organization but their order” (Beniger, 34).
Organization differs from order because organization, unlike order, is teleological, or end-driven. The difference is significant because organization offers control: an amoeba can control itself, while a rock obviously cannot, and this control can likewise be observed as a local reduction in entropy. What makes organization and control most interesting, however, is not comparing an organism to a rock, but comparing an organism to a machine.
Like computers, organisms are programmed: the nucleotide base pairs of DNA serve as the programming language of life. In his definition of a computer in Computing Machinery and Intelligence, Alan Turing made clear that a digital computer must not only store data and calculate with it, but must be controlled through programming.
We have mentioned that the ‘book of rules’ supplied to the computer is replaced in the machine by a part of the [data] store. It is then called the ‘table of instructions’. It is the duty of the control to see that these instructions are obeyed correctly and in the right order [emphasis added]. The control is so constructed that this necessarily happens (Turing, 437).
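Turing's 'table of instructions' can be made concrete with a minimal stored-program sketch (a hypothetical toy machine, not Turing's own design): the program lives in the same store as the data it operates on, and the control's only duty is to fetch and obey instructions in order.

```python
# Memory holds both the "table of instructions" and the data it operates on.
memory = [
    ("LOAD", 6),     # 0: copy the value at address 6 into the accumulator
    ("ADD", 7),      # 1: add the value at address 7
    ("STORE", 8),    # 2: write the accumulator back to address 8
    ("HALT", None),  # 3: stop
    None,            # 4: (unused)
    None,            # 5: (unused)
    20,              # 6: data
    22,              # 7: data
    0,               # 8: the result will be stored here
]

accumulator = 0
pc = 0  # program counter: the control sees that instructions are obeyed in order
while True:
    op, addr = memory[pc]
    pc += 1
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[8])
```

Because instructions are just another kind of stored data, the 'book of rules' can itself be rewritten, which is what makes the machine universally programmable.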
Turing's observation that digital computers mimic human computers led to what he called the 'imitation game,' or, as we know it, the Turing Test. He goes on to state that "in about fifty years time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning" (Turing, 442).
Turing then addresses what he means by the original question, 'Can machines think?' The goal of the imitation game is not to replicate human emotion or humanity, but to imitate human logic. Logic and programming are fundamentally limited by storage; given unending growth in storage, however, the possibilities for programming are equally limitless. Turing also faults some of his critics for relying on weak inductive reasoning: "We cannot so easily convince ourselves of the absence of complete laws of behavior as of complete rules of conduct" (Turing, 452). Despite our hubris, he argues, humans still have major gaps in our knowledge of how the brain works and makes decisions. His fundamental criticism is that his opponents claim so complete an understanding of both machines and humans that they can deny even a 70% success rate for machines decades in the future.
Programming and control in human/machine systems proved highly effective. Competition for contracts among engineering firms like GE, Sperry Gyroscope, and Ford Instruments drove the development of faster, more accurate, and more automated systems in which humans and servomechanisms performed similar work alongside a computer toward a predetermined end. It is here that machines and life differ historically and fundamentally. Machines are by design and definition teleological systems; they are completely end-driven. The goal of the imitation game was not to recreate a human, only the logic of a human. A fire control computer exists to better shoot down enemy targets; a self-correcting V-2 rocket has a very clear purpose. Life, however, is not end-driven. Rather than being forced toward a predetermined goal, life is internally self-organized. The Chilean biologists Humberto Maturana and Francisco Varela coined the term autopoiesis for this kind of self-organizing system.
Early cybernetics was committed to homeostatic models, citing examples from biology and fire control. But the communicative structure of a homeostatic system is end-driven in a way that humans are not. We are, after all, free to do as we please with our bodies.
A living system is not a goal-directed system; it is, like the nervous system, a stable state-determined and strictly deterministic system closed on itself and modulated by interactions not specified by its conduct. These modulations, however, are apparent as modulations only for the observer who beholds the organism or the nervous system externally, from his own conceptual (descriptive) perspective, as lying in an environment and as elements in his domain of interactions. (quoted in Hayles, 139)
Maturana and Varela drew the concept of autopoiesis from observing how life sustains itself by maintaining its own organization.
His key insight was to realize that if the action of the nervous system reflected its organization, the result is a circular, self-reflexive dynamic. A living system's organization causes certain products to be produced, for example, nucleic acids. These products in turn produce the organization characteristic of that living system. To describe the circularity, he coined the term autopoiesis or self-making. (Hayles, 136)
Autopoietic systems are characterized by their reflexive nature, but they are not closed systems. All closed systems must eventually succumb to entropy, but organisms exist within an environment, a relationship Maturana and Varela referred to as 'coupling.' Most interesting about autopoiesis, though, were its political implications for humans: Maturana and Varela in fact referred to autopoietic systems as machines, fully aware of that implication.
The opposite of autopoiesis is allopoiesis: a system, such as a car or a gun, whose end is to produce something other than its own organization. Maturana nonetheless extended the political implications of autopoiesis to human society, writing in Autopoiesis and Cognition that an ideal society would
…see all human beings as equivalent to oneself, and to love them… without demanding from them a larger surrender of individuality and autonomy than the measure one is willing to accept for oneself while integrating it as observer… is in essence an anarchist society, a society made for and by observers that would not surrender their condition of observers as their only claim to social freedom and mutual respect. (Hayles, 142)
This explicitly anarchic thought, together with the blurring of the animal/machine distinction that autopoiesis brings with it, broaches questions beyond the organization and structure of individual systems: questions about the role people play in concert with machines, and with each other.
Social Cybernetic Systems
As previously discussed, control and programming are intrinsically linked. Beniger defines control as "purposeful influence toward a predetermined goal" (Beniger, 7) and goes on to claim that "all control is thus programmed: it depends on physically encoded information, which must include both the goals toward which a process is to be influenced and the procedures for processing additional information toward that end" (Beniger, 40). This definition is consistent with autopoiesis, which has the predetermined goal of replicating and sustaining its own organizational structure. Beniger's definition of control through programming thus breaks down as follows:
- Control is the organization of matter and energy
- Control is dictated by programming
- Programming requires a processor to process information for the program
Additionally, control operates within three different temporal frames of reference:
- Existence – Maintaining organization, counter to entropy and disorder
- Experience – Adapting goal directed processes to variation and change in conditions
- Evolution – Reprogramming less successful goals and procedures, preserving successful ones (feedback)
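Beniger's three frames can be restated as a minimal sketch (all names and numbers here are invented for illustration): a controller whose encoded goal and procedure maintain organization against disturbance (existence), whose performance is measured under varying conditions (experience), and whose less successful procedures are discarded through feedback (evolution).

```python
import random

random.seed(1)

class Controller:
    """A toy system per Beniger's definition: purposeful influence toward a
    predetermined goal, driven by encoded information (goal plus procedure)."""

    def __init__(self, goal, step_size):
        self.goal = goal            # the predetermined goal
        self.step_size = step_size  # the encoded procedure: how strongly to correct
        self.state = 0.0

    def maintain(self, disturbance):
        """Existence: counter an outside disturbance to stay organized around the goal."""
        self.state += disturbance
        self.state += self.step_size * (self.goal - self.state)  # feedback correction
        return abs(self.goal - self.state)                       # residual error

def lifetime_error(controller, conditions):
    """Experience: total error a controller accumulates under varying conditions."""
    return sum(controller.maintain(d) for d in conditions)

# Evolution: pit competing procedures (step sizes) against the same conditions
# and preserve the most successful one.
conditions = [random.uniform(-2, 2) for _ in range(100)]
candidates = {step: lifetime_error(Controller(goal=10.0, step_size=step), conditions)
              for step in (0.1, 0.5, 0.9)}
best = min(candidates, key=candidates.get)
print("surviving procedure: step_size =", best)
```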
From these criteria, Beniger identifies four levels of control in human society.
Table 1: Orders of Control

| Level of Control | Programming | Existence | Experience | Evolution |
|---|---|---|---|---|
| Molecules of DNA | | Replication of programming | Adaptation of genetically controlled processes to environment | |
| Learned behavioral programs | Behavior controlled by programming stored in memory | | Learned responses to environment, adaptation of program to environment | Cultural diffusion, purposeful innovation, differential adoption |
| Formal organizations of individuals | Explicit rules and regulations | Rationalized processing, record keeping, hierarchical decision making, formal control | Organizational response to environment, adaptation through reorganization | Diffusion of rationality with differential adoption as culture, selection pressure on organizations |
| Mechanical and electronic information processors | Purposively designed functions and programs | Information input and storage, programmed decision-making outputs | Interaction of processor and environment, adaptation through rewriting programming | Diffusion of processors and programs with differential adoption as culture, government, and market selection |
Beniger examines the rise of the information economy through the rapid changes in the economy of the 19th century. He argues that industrialization's greatest effect was "to speed up a society's entire material processing system, thereby precipitating what I call a crisis of control, a period in which innovations in information-processing and communication technologies lagged behind those of energy and its application to manufacturing and transportation" (Beniger, vii). Beniger lists a series of historical examples, from the safety crisis in railroad management to the efficient production and distribution of consumer goods. These crises required a level of organization and control supplied in the form of bureaucracy, the apparatus of governmental control that the German sociologist Max Weber referred to as 'domination through knowledge.' Beniger thereby explicitly connects the territory of political management and theory with that of cybernetics and communication.
By connecting control and organization across these multiple levels of existence, Beniger firmly establishes the conceptual framework by which cybernetic principles could logically be applied to social systems. However, while cybernetics may have established certain principles connected to governance in writing, their application at the state level had varied success in practice. No single nation's experience of industrialization is the same, and Beniger focuses heavily on American industrialization. Cybernetic control appealed to certain individuals in Marxist states, as well as to those with Marxist leanings, and attempts at establishing nationwide cybernetic control systems were undertaken in Chile and in the USSR under their respective leaders.
Chile and Cybersyn
Eden Medina chronicles the Chilean attempt at a national cybernetic system of economic and political management in Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende's Chile. Salvador Allende sought a nationwide cybernetic system of economic management to modernize Chile's economy and address specific Marxist criticisms of market forces and industrial labor. Developed under the supervision of Stafford Beer from 1971 to 1973, Project Cybersyn was an attempt to return control of Chile's economy to its people through a cybernetic system of real-time management and economic planning. Feedback and communication were planned to connect the individual worker on a factory floor all the way to a central decision-making and planning room at the highest level of government. While the project was aborted, along with the rest of Allende's dream of a democratically elected socialist state, in a bloody coup, Cybersyn's early designs and implementation show how political and ideological principles could be programmed into a cybernetic system of management.
Chilean history provides a clear example of how alternative geographical and political settings gave rise to new articulations of cybernetic ideas and innovative uses of computer technology, ultimately illustrating the importance of including Latin American experience in these bodies of scholarship. (Medina, 572).
Furthermore, the move toward a more capable control system was precipitated by a huge shift in the Chilean economy, not unlike the crisis of control described by Beniger: "…the rapid growth of the nationalised sector quickly created an unwieldy monster" (Medina, 580).
Built upon a nationwide Telex network, Cybersyn was based on Beer's hierarchical five-level viable system model, which was in turn reflexively modeled within each level of the system. The five levels of economic management were split into two sections, with the bottom three managing the day-to-day minutiae of production and operation in the factories. From the bottom up, they were:
- Management of Individual Plants/Firms
- Information Director
- Director of Operations
In this model, individual plants would report economic indices through the Information Director to the Director of Operations. The system employed automated stable-state feedback: if the numbers for a specific plant fell outside an acceptable range, an alert was sent to the Director of Operations to handle. Similarly, if the Director of Operations could not resolve the issue within an acceptable timeframe, the top two levels of management would intervene.
- Management dedicated to development and decision-making
- Chief Executive
The fourth level of management was a unique addition to existing structures of economic management. Together, these top two levels were responsible for the overall direction of Chile's economy.
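Cybersyn's exception-based reporting can be sketched in miniature (the indices, acceptable bands, and escalation deadline below are invented for illustration; the project's actual statistical filtering was far more sophisticated): plant reports flow upward only when they fall outside an acceptable range, and unresolved alerts climb the hierarchy.

```python
# Hypothetical acceptable bands for two production indices, and the number of
# days an alert may stay with the Director of Operations before escalating.
ACCEPTABLE = {"output": (80, 120), "absenteeism": (0, 10)}
ESCALATION_DEADLINE = 3

def check_report(plant, report):
    """Return an alert for every index that falls outside its acceptable range."""
    alerts = []
    for index, value in report.items():
        low, high = ACCEPTABLE[index]
        if not (low <= value <= high):
            alerts.append({"plant": plant, "index": index,
                           "value": value, "days_open": 0})
    return alerts

def route(alert):
    """Unresolved alerts climb the hierarchy, mirroring the viable system model."""
    if alert["days_open"] <= ESCALATION_DEADLINE:
        return "Director of Operations"
    return "Development and decision-making levels"

# A hypothetical plant reports one index out of range; only that index alerts.
alerts = check_report("Textile Plant 12", {"output": 64, "absenteeism": 7})
print(alerts)

alerts[0]["days_open"] = 5  # the Director fails to resolve it in time...
print(route(alerts[0]))     # ...so the alert escalates to the top two levels
```

Note how normal operation generates no upward traffic at all; only exceptions travel up the hierarchy, which is what made a Telex-based national system even conceivable.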
This model embodied two distinct principles of Allende's vision for Chile. First, the market, the traditional source of feedback under capitalism, has no place in this model. Second, the model eliminates bureaucratic economic control in favor of automated feedback systems. These decisions fall neatly in line with Allende's goal of a nationalized, centrally planned economy. Cybersyn was also concerned, however, with worker participation, addressing the specific Marxist critique that industrial labor dehumanizes the proletariat.
The figure of the worker appears at the heart of these systems, reinforcing the perceived importance of workers to the Chilean nation… Here, the worker contributed both physically and mentally to the production process, an illustrated response to Marx's critique of alienated labour in capitalist societies, where the worker 'does not develop freely his mental or physical energies but is physically exhausted and mentally debased'. (Medina, 597)
The idea was that workers at every level of production would eventually be trained to use the equipment necessary to work with Cybersyn effectively. Yet even before the coup, Cybersyn had significant problems.
First was the inherent contradiction of maximizing worker autonomy within a centralized decision-making system; Beer was convinced he could 'design freedom' into his system by adjusting quantitative values. And while Cybersyn never became the economic management system Beer imagined, it demonstrated its value during a national economic event that disrupted production, showing how such a system could assist a government in maintaining order. In 1972, the opposition party helped organize nationwide strikes, which the government was able to resolve with the help of the telex lines established by the project.
“In response to the strike, which threatened the government’s survival, Flores created an emergency operations centre where members of the Cybersyn team and other high-ranking government officials monitored the two-thousand telexes sent per day that covered activities from the northern to southern ends of the country. The rapid flow of messages over the telex lines enabled the government to react quickly to the strike activity and mobilise their limited resources in a way that lessened the potential damage caused by the Gremialistas…. Cybersyn participant Herman Schwember remarked ‘The growth of our actual influence and power has exceeded our imagination.’” (Medina, 593)
Cybersyn asserted control not only in its use but also in smaller design decisions. For example, all information terminals in the central Operations Room at the top of the decision system were designed to be operated without keyboards, because typing was considered a feminine job; excluding women from the decision-making process was an active, conscious choice.
Cybersyn was ultimately more concerned with the collective welfare of the state than with the autonomy of the workers and individuals who made it up. Although the project faced numerous obstacles from opponents both domestic (the political opposition) and international (economic warfare by the Nixon administration), the system did more to reinforce existing structures of management and control than to realize Allende's socialist vision for Chile.
Cybernetics in the USSR
As Slava Gerovitch documents in From Newspeak to Cyberspeak, Soviet Russia became a significant hub of cybernetic research and implementation. Cybernetics first gained popularity in the military, and cyberneticians in the USSR dreamed of realizing their own vision of the proletarian revolution the Soviets symbolized. But while thinkers like Wiener and Allende attached political ideals about individuals to cybernetic systems, the Soviet effort lacked any such explicit concern with individual welfare and rights. Rather, Soviet scientists and institutions fully adapted the principles and ideals of cybernetics to further their own control over political and bureaucratic structures in the USSR.
Gaining prominence during the post-Stalin years, cybernetics had to be reconciled with Marxism, the legacy of political and philosophical thought on which the Soviet system was based.
Philosophers loyal to cybernetics not only reconciled cybernetics with dialectical materialism but also effectively worked out a strategic alliance between the two. Cybernetics no longer posed a threat to dialectical materialism; it no longer served as a stick with which [people] tried to chase philosophers out of the domain of science. Cybernetics was tamed and domesticated [emphasis added]; a former rebel turned into a respectable discipline fully compatible with the principles of dialectical materialism. (Gerovitch, 258)
Once cybernetics fell in line with the Party’s goals, it was applied to both military and economic management. While the military invested in cybernetic equipment and munitions much as the West did on its side of the Cold War arms race, the application of cybernetics to civilian life played out very differently.
Soviet cyberneticians at the Central Economic Mathematical Institute (CEMI) sought to completely automate management and economic decision-making. CEMI dreamed of a system like Cybersyn, in which the entire Soviet economy could be managed from Moscow. However, attempts at creating a perfectly ‘objective’ model of the economy proved impossible, and political opposition arose from existing bureaucracies and party members unwilling to cede any of their authority. Additionally, as in Chile, the ‘optimal’ and ‘objective’ model of the economy completely failed to account for the market. Cybernetics was unable to assist the Soviet economy, but it did find a home in the various ministries and departments of the government. Notably, the Soviet intelligence apparatus was quick to adopt cybernetics when its “agencies quickly realized the opportunities for large-scale information collection and processing that computer technology had opened up” (Gerovitch, 282). The Soviet intelligence system amassed an impressive number of reports and dossiers on thousands of Western individuals and companies in a centralized database.
The legacy of 20th century Soviet cybernetics lies in existing power and control structures using cybernetics to solidify their rule. By making social control the first and foremost goal of the discipline, the various Soviet bureaucracies each built their own systems, well integrated into their existing methods of control. While this ironically meant that the Soviet Union lacked a single centralized system, it further reinforced authoritarian social control.
Soviet bureaucrats, in a way, learned the lessons of cybernetics better than did some overenthusiastic cybernetic reformers. Instead of creating a nationwide network, Soviet computerization efforts resulted in a patchwork of incompatible information-management and production-control systems. Instead of facilitating the decentralization of power through computer simulation and market mechanisms, computer technology now served to strengthen control within each ministry. The growing power of ministries quickly reduced the autonomy of individual enterprises to a minimum, and economic reforms were effectively buried. The idea to reform the government with the help of a nationwide automated management system was abandoned. (Gerovitch, 284)
Cybernetics became another tool of Soviet authority, a trend solidified by the harassment and purging of scientists and thinkers who challenged Soviet orthodoxy.
For scientists and academics working in the field, cybernetics has been inextricably linked to control from the beginning. However, societal control is not a monolithic concept. The kind of control needed in a military setting is distinctly different from the kind needed for a civilian population. The kind of control needed to operate a computer is decidedly different from the kind needed to maintain an economy. Further complicating issues of control at the societal level is the fact that our understanding of how humans function and are controlled remains incomplete. Given this absence of knowledge, it is easy to see how cyberneticians fell back upon their own personal biases regarding the nature of humanity and society.
Political theory, sociology, history, and countless other disciplines offer a myriad of examples, theories, and models of human and societal control, which individuals creating cybernetic systems internalize before they ever begin to build them. Further complicating the issue is the phenomenon of individuals producing radically different interpretations of the same text. Just as no two nations share identical experiences of industrialization, it is not unfair to claim that no two individuals share identical worldviews, morals, or decision-making. Given this complexity, it is no wonder that the Chilean, Soviet, and military cybernetic systems fundamentally relied on force, or the threat of force, against individuals in order to achieve their goals. Coercing individuals through violence has always been a reliable method of control, though it carries obvious moral and ethical problems.
Systems of control are complex, existing in forms as abstract as social customs and as concrete as a brick wall. As expressions of control, future cybernetic systems have the potential to encode and embody the various contours and nuances of how society controls people. Society by definition organizes and controls people as an arrangement superior to the state of nature, and cybernetics has yet to transcend that programming. While certain influential cyberneticians have hoped that cybernetics would transcend existing social control, the examples from history demonstrate that cybernetic systems flourish not by breaking existing social structures, but by complementing them.
Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society. Cambridge, Mass.: Harvard University Press, 1986. Print.
Gerovitch, Slava. From Newspeak to Cyberspeak: A History of Soviet Cybernetics. Cambridge, Mass.: MIT Press, 2002. Print.
Haraway, Donna. “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s.” Australian Feminist Studies 2.4 (1987): 1-42. Print.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999. Print.
Lettvin, J., H. Maturana, W. McCulloch, and W. Pitts. “What the Frog’s Eye Tells the Frog’s Brain.” Proceedings of the IRE 47.11 (1959): 1940-1951. Print.
Licklider, J. C. R. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1.1 (1960): 4-11. Print.
Medina, Eden. “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile.” Journal of Latin American Studies 38.3 (2006): 571. Print.
Mindell, David A. Between Human and Machine: Feedback, Control, and Computing before Cybernetics. Baltimore: Johns Hopkins University Press, 2002. Print.
Turing, A. M. “Computing Machinery and Intelligence.” Mind LIX.236 (1950): 433-460. Print.
Wiener, Norbert. The Human Use of Human Beings: Cybernetics and Society. 2nd ed. Garden City, New York: Doubleday, 1954. Print.
Jackson, Shelley. “‘my body’ – a Wunderkammer.” N.p., n.d. Web. 12 Aug. 2014. <http://www.altx.com/thebody/>.
 Maturana would go on to develop the concept of autopoiesis, a concept that becomes important later in this paper.
 It should be noted here that military culture, compared to general social culture, places an obvious emphasis on totalitarian discipline as a necessary feature of the profession. The military as a system or network of actors is a teleological one. Military conflict (not the social use of the military but the battles themselves) is an inherently zero-sum contest, with the entire system ideally working toward victory and the defeat of the opposition.