Misfits and architecture machines

A few days ago, I wrote about some basics of cybernetics, concluding with a snippet from Gordon Pask’s “The Architectural Relevance of Cybernetics.” “Let us turn the design paradigm in upon itself,” he wrote, “let us apply it to the interaction between the designer and the system he designs, rather than the interaction between the system and the people who inhabit it.”[1]

This idea proved very attractive to two young architects, Christopher Alexander and Nicholas Negroponte. Engaging with cybernetics, they saw new possibilities for how designers would work: technology would surprise and challenge the designer, break design problems into smaller parts, and address issues of complexity.

Christopher Alexander applied cybernetics and AI (among other disciplines) to architecture in an attempt to address the growing complexity of design problems. He noted the difficulty of designing for intermeshing systems, even when the designed object itself (whether something as big as a village or as small as a teapot) seemed uncomplicated. “In spite of their superficial simplicity, even these problems have a background of needs and activities which is becoming too complex to grasp intuitively,” he wrote in Notes on the Synthesis of Form in 1964.[2] The design process he described in Notes required a computer to analyze complex sets of data and identify “misfits”—points where a form fails to meet its requirements—which the designer then ameliorated by creating a form that solved the problem.

While Nicholas Negroponte is best known today as a technology guru and founder of the MIT Media Lab, I’m interested in his architectural background and his notion of “architecture machines”: evolving systems that would work in “symbiosis” with designer and resident, and that Negroponte thought would change the making of architecture. As director of the Architecture Machine Group at MIT, founded in 1968, he assembled a theory of how such systems would work in the 1970 book The Architecture Machine (dedicated “to the first machine that can appreciate the gesture”[3]) and the 1975 book Soft Architecture Machines, and built a series of computer-aided design tools and programs throughout the 1970s.

An architecture machine, in Negroponte’s estimation, would turn the design process into a dialogue that would alter the traditional human-machine dynamic. He wrote, “The dialogue would be so intimate—even exclusive—that only mutual persuasion and compromise would bring about ideas, ideas unrealizable by either conversant alone. No doubt, in such a symbiosis it would not be solely the human designer who would decide when the machine is relevant.”[4] In order to achieve these design goals and a close relationship with the user, the machine would have to incorporate artificial intelligence, he wrote, “because any design procedure, set of rules, or truism is tenuous, if not subversive, when used out of context or regardless of context.”[5] Intelligence for Negroponte is thus not a passive quality but an active one, expressed through behavior and improved over time.

Building a successful architecture machine, however, proved much more difficult in practice, both because of the limited quality of interaction the machines achieved and because of their designers’ fascination with bells and whistles. URBAN5 (1967) was Negroponte’s first major computer-aided design program to put his ideas about conversation, dialogue and intelligence to work. In his own judgment, it failed because it could not adapt and its dialogue was too primitive. The shortcomings of URBAN5 led the Architecture Machine Group to develop “The Architecture Machine”—a time-sharing computer that, in addition to typical peripherals, had a camera interface on wheels (GROPE), a robot arm (SEEK), tablet-based sketching stations and “an assemblage of software.” Negroponte wrote, “The prognostications of hardware enumerated in wanton fantasy have been achieved and even superseded in the actual Architecture Machine of 1974. All too often we spend our time making better operating systems, fancier computer graphics, and more reliable hardware, yet begging the major issues of understanding either the making of architecture or the makings of intelligence.”[6] “The Architecture Machine” was perhaps a victim of its own success.

Today, computer-aided design systems proceed as we ask them to. They don’t jump in and do things for us, they don’t create new layouts, they don’t hold conversations with us. Expert systems often fall short: they guess wrong, they get in the way. But have we thrown out the baby with the bathwater? In the surprises and challenges of such systems, perhaps we would have come up with things we never would have imagined.

[1] Gordon Pask, “The Architectural Relevance of Cybernetics,” Architectural Design 7, no. 6 (1969): 496.
[2] Christopher Alexander, Notes on the Synthesis of Form (Cambridge, Mass.: Harvard University Press, 1964), 3.
[3] Nicholas Negroponte, The Architecture Machine (Cambridge, Mass.: MIT Press, 1970), 11-12.
[4] Ibid., 11-12.
[5] Ibid., 1.
[6] Nicholas Negroponte, Soft Architecture Machines (Cambridge, Mass.: MIT Press, 1975), 157-71.

This is one of 50 posts about cyborgs, a project commemorating the 50th anniversary of the term.

A network of constant interactions and communications

[This post is a part of a month of Cyborgs, a project started by Quiet Babylon’s Tim Maly. It’s the first of two.]

To get to cyborgs, we need to start with cybernetics.

Norbert Wiener. Image source: Complex Fields blog.

Cybernetics is a network of constant interactions and communications. Norbert Wiener (1894–1964) coined the term in 1948 from the Greek word for steersman. The term describes feedback—communication and control in systems—in which a system obtains information on its progress, assesses that feedback, corrects its course, and then receives further feedback on the success of the correction.

The genesis of cybernetics took place in the belly of ballistics and radar development during World War II. It took science and social science, then art and architecture, by storm in the 1950s and ’60s. While it fell out of favor in the 1970s (one possible reason is Vietnam-era anti-technology sentiment, Andrew Pickering suggested in a conversation we had a few years ago), it’s making a resurgence today—even turning up as a contemporary topic of study.

No wonder cybernetics proved so very attractive to so many fields: it described all systems in general, because all systems ultimately were cybernetic, whether organic, mechanical, social or aesthetic. “Any organism is held together in this action by the possession of means for the acquisition, use, retention and transmission of information,”[1] Wiener wrote, making information the raison d’être of any organism, whether a living being, a built circuit or a societal construct. Cybernetics’ implications extended to engineering, computer science, biology, philosophy, anthropology, art, architecture and even the organization of society—the direction of Wiener’s second book on cybernetics, The Human Use of Human Beings. One key reason for the spread was the Macy Conferences (1946–53), whose core group (including Wiener, W. Ross Ashby, Heinz von Foerster, Gregory Bateson, Margaret Mead, John von Neumann and Buckminster Fuller) gathered twice a year to explore the science of feedback in the social and biological sciences. The Macy Conference attendees sought to create models of the brain and of living organisms using logical systems, linguistics and information theory, and early computers.

Wiener’s model, first-order cybernetics (the cybernetics of observed systems), treats the system as a black box, and that view has its limitations. The model becomes much more interesting with second-order cybernetics, a sort of meta-cybernetics: the cybernetics of observing and participating in systems.[1] Consider a thermostat. On one hand, it is a system that monitors feedback in order to adjust itself toward its desired setting. However, the thermostat does not exist in isolation: a human being sets it in the first place.[2] First-order cybernetics assumes that a system is a discrete thing, unadulterated by our interaction with it. Enter second-order cybernetics, which holds that any system can be changed by its observation. It studies the way people construct models of systems, not just how the systems themselves function and learn. And since people are cybernetic systems themselves, their observations are de facto second-order cybernetic.
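The thermostat’s first-order loop can be sketched as a toy simulation. (This is my own illustration, not anything from Wiener or Pask; the `thermostat_loop` function, its name and its numbers are all invented for the sketch.) The system measures the gap between its state and a goal, acts to close it, and measures again, while the goal itself, the setpoint, is chosen by a human standing outside the loop: exactly the observer that second-order cybernetics brings into the picture.

```python
# A thermostat as a first-order cybernetic loop: sense the state,
# compare it to the goal, act to correct, and repeat. The human who
# chose the setpoint sits outside this loop.

def thermostat_loop(temperature, setpoint, steps=50):
    """Bang-bang controller: heat when below the setpoint, drift down otherwise."""
    for _ in range(steps):
        error = setpoint - temperature   # feedback: measure the gap to the goal
        if error > 0:
            temperature += 0.5           # corrective action: heating on
        else:
            temperature -= 0.2           # heating off; the room slowly cools
    return temperature

# Starting cold, the loop hunts around the human-chosen setpoint of 20°.
final = thermostat_loop(temperature=15.0, setpoint=20.0)
print(final)
```

The loop never settles exactly at 20° but oscillates in a narrow band around it, a hunting behavior typical of simple on/off feedback controllers.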

Where do we see these things play out?

From Making Things Public: Atmospheres of Democracy, ZKM/Center for Art and Media, Karlsruhe, 2005

Stafford Beer, a British cybernetician, applied cybernetics to business strategy through operational research, “the science of proper control within any assembly that is treated as an organic whole.”[2] In the early 1970s, he worked with the Allende government in Chile in order to apply his concepts as a mechanism for societal control.[3] The work culminated in Project Cybersyn, with the Cybersyn Opsroom that you see here. (Eden Medina has a book coming out next year about Chile and Cybersyn, an expansion of her dissertation and her article “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile.”)

Gordon Pask developed musical cybernetic systems that count as early cyborg hybrids. His 1953 Musicolour machine accompanied musical performers: as a performer or group played, Musicolour responded to the changing music with lights and movement, creating a sort of hypnotic effect for those who played with it. But if the performer became too repetitive and did not engage the machine enough, Musicolour would grow bored and stop responding—the first cybernetic art system to do so.[4] Pask also noted that while people trained the machine, it trained them back, creating a feedback loop in which performers felt the machine was an extension of their minds and bodies.[5]

Left, Gordon Pask. Right, the Musicolour Machine (1953).

In 1969, Pask wrote “The Architectural Relevance of Cybernetics.” He predicted that computer-aided design tools would develop into “useful instruments”; that the “machine for living in” would predict the behavior of its users and residents and engage its residents’ interest—not unlike an advanced Musicolour machine—and that computers would control and change the qualities of material surfaces, using sensors to return information to the computer about the interaction.[6] He wrote:

Let us turn the design paradigm in upon itself; let us apply it to the interaction between the designer and the system he designs, rather than the interaction between the system and the people who inhabit it. The glove fits, almost perfectly in the case when the designer uses a computer as his assistant. In other words, the relation ‘controller/controlled entity’ is preserved when these omnibus words are replaced either by ‘designer/system being designed’ or by ‘systemic environment/inhabitants’ or by ‘urban plan/city’ … But notice the trick … the designer does much the same job as his system, but he operates at a higher level in the organizational hierarchy… Further, the design goal is nearly always underspecified and the ‘controller’ is no longer the authoritarian apparatus which this purely technical name brings to mind.[7]

Turning the design paradigm upon itself produces a new form of architecture. Internalizing the lessons of cybernetics externalizes the possibilities for architecture and art to respond to the people who engage with them—as we will see with architect Cedric Price’s collaborations with Pask. (I’ve got so much to say about this that I’m in the midst of a dissertation on a number of his projects.) We’ll return to cybernetics and cyborg architecture in the next post here.

[1] Norbert Wiener, Cybernetics; or, Control and Communication in the Animal and the Machine (Cambridge, Mass.: Technology Press, 1948), 24.

[2] Gordon Pask, An Approach to Cybernetics (London: Hutchinson, 1961), 15.

[3] Eden Medina, “Democratic Socialism, Cybernetic Socialism: Making the Chilean Economy Public,” in Making Things Public: Atmospheres of Democracy, ed. Bruno Latour and Peter Weibel (Cambridge, Mass.: MIT Press; ZKM/Center for Art and Media Karlsruhe, 2005).

[4] Gordon Pask, “A Comment, a Case History and a Plan,” in Cybernetics, Art, and Ideas, ed. Jasia Reichardt (Greenwich, Conn.: New York Graphic Society, 1971), 77.

[5] Ibid., 85.

[6] Gordon Pask, “The Architectural Relevance of Cybernetics,” Architectural Design 7, no. 6 (1969): 495.

[7] Ibid.: 496.
