Where did this interaction come from? — a brief history of interaction design

This story is part of an unpublished pamphlet on Situated Technologies as a trans-discipline of design, technology, and art, co-authored by Pouyan Bizeh and me back in 2016. I have used some material from that pamphlet to elaborate on key origins of the notion of interaction [design]. It may help fellows in HCI, IxD, UX, and product design understand the origins of their passion/profession.

Mahan Mehrvarz
UX Collective

--

Notion of Cybernetics

It no longer seems likely that the carrying out of a “purposeful act,” like picking up a book from a table, is a simple one-way process in which the appropriate part of the brain dictates to the appropriate muscles, by means of neurons, what action they must perform to bring about the desired goal. Rather, in any “system” (a combination of components acting together to perform a specific objective), each purposeful act involves a circular process: at each stage, information about the current “state of the system” is fed back to the central nervous system to initiate the next move, and this procedure goes on until the originally desired goal has been achieved. This feature, found in both living creatures and some man-made machines, is known as “feedback.”

This is the notion of “Cybernetics,” coined by Norbert Wiener, which corresponds to the study of control and communication systems. Cybernetics and feedback are integrated notions: any given system with the capability to generate and continuously study feedback is using a cybernetic approach that enables it to adjust to unpredictable changes. Stafford Beer moves beautifully from systems to cybernetics, saying:

“When I say that any system is in control, I mean that it is ultra-stable: capable of adapting smoothly to unpredicted changes. It has within its structure a proper deployment of requisite variety.”

A system is called “static” if its present output depends only on its present input. On the other hand, the system is “dynamic” when its present output depends on its past input. “In a dynamic system, the output changes with time if the system is not in a state of equilibrium.” Cybernetics allows dynamic systems to self-regulate and self-correct without any end-state or definite predetermined goal.
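Wiener’s circular process can be sketched in a few lines of modern code. The heater, setpoint, and gain below are invented for illustration; this is a minimal sketch of negative feedback, not anything from Wiener’s own work:

```python
# A minimal cybernetic feedback loop: a heater regulating room temperature.
# At each step the "state of the system" (current temperature) is fed back
# to the controller, which decides the next corrective move.

def regulate(temperature, setpoint=21.0, steps=50):
    """Drive temperature toward setpoint via negative feedback."""
    history = [temperature]
    for _ in range(steps):
        error = setpoint - temperature      # feedback: compare state to goal
        temperature += 0.3 * error          # corrective action proportional to error
        history.append(temperature)
    return history

history = regulate(temperature=15.0)
print(round(history[-1], 2))  # converges toward the setpoint, 21.0
```

The system is dynamic in the sense defined above: each output depends on the past state, and the loop self-corrects until it reaches equilibrium.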

Norbert Wiener’s most famous portrait, with a blackboard in the background
Norbert Wiener (source)

Cybernetics and Interactivity

The embryonic notions of interactive art began with artists’ objective to share their former authoritative position not only with the audience but with the machinery as well.

Marcel Duchamp’s 1920 Rotary Glass Plates is one of the initial steps toward interactive art. In 1938, Duchamp also tried to illuminate paintings with a light that would only switch itself on when visitors activated a light sensor.

Rotary Glass Plates

Before the 1960s, the work of some Dada artists had taken the initial steps toward the tradition of interactive art. Max Ernst put an ax beside his sculpture to be used by visitors “in case they did not like the object.” Some Dada painters also invited the audience to complete incomprehensible Dada drawings or paintings in the space that had intentionally been left empty.

A Dual Origin: Computer Science and Art

There are two different categories of origins of interactive art. One is the development path of participatory art forms like performances, happenings, and site-specific work. The other is the technology-oriented approach of artist/computer scientists such as Myron Krueger and David Rokeby, as well as video artists such as Nam June Paik.

Many roots of interactive art can be recognized in the 1960s: the annihilation of the barriers between life and art; the “dematerialization of the art object” (an idea in conceptual art); process art (where the actual doing, and the actions themselves, are defined as the work of art, seeing art as pure human expression); participatory art (an approach to making art in which the audience is engaged directly in the creative process, becoming co-authors, editors, and observers of the work); the Fluxus movement (an international and interdisciplinary group of artists, composers, designers, and poets that took shape in the 1960s and 1970s); the Happening movement (a form of performance art in streets, garages, and shops, as opposed to the generally exclusive approach of art galleries and exhibitions); and Situationism, Art and Technology, kinetic art, and cybernetic art. They are part of a process that had a profound impact on the relationship between an artwork and its audience.

In 1960, Joseph Carl Robnett Licklider, with an unusual background in both engineering and behavioral science, introduced the concept of man-computer symbiosis as a cooperative interaction between men and electronic machines. He suggested man-computer symbiosis as opposed to humanly extended or semi-automatic systems, in which machines are only mechanical (or computational) extensions of men, and envisioned the aim of these (fully automatic) systems mainly to “enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs.”

In 1961, Allan Kaprow defined “Happenings” as a form of [performance] art in streets, garages, and shops as opposed to the general exclusive approach of art galleries and exhibitions. Simultaneously with the Happenings, reactive kinetic art evolved, replacing instructions given by the leader of the Happening with technically communicated and preprogrammed participation.

Happening

In the early 1960s, Nicolas Schöffer created the series of “CYSP” (Cybernetic-Spatiodynamic) sculptures, capable of responding to changes in sound, light intensity, color, and the movement of the audience. The CYSP sculptures were the main instances of the cybernetic art movement. In 1965, Schöffer also presented plans for a cybernetic city at the Jewish Museum in New York, demonstrating that for Schöffer the ability to program not only sculptures but a whole urban area offered the idea of a dialogue between technology and environment.

Image of a Cybernetic-Spatiodynamic sculpture by Nicolas Schöffer
Cybernetic-Spatiodynamic sculpture

In 1966, a series of performances under the title “Nine Evenings: Theater and Engineering” was held in New York. In one of the performances, John Cage and Merce Cunningham employed a sound system that reacted, via photoelectric cells and microphones, to sounds and the movements of dancers. Cage used a wireless system for switching loudspeakers on and off, which reacted to movement via photocells. In Variations VII, Cage also used contact microphones, making body functions that normally cannot be heard, like the heartbeat and noises from the stomach and lungs, audible.

In 1968, Robert Rauschenberg, who had been key to the art-and-technology movement, developed a visual reactive environment titled Soundings that involved the non-specialist, unprepared visitor. Soundings consisted of three sheets of plexiglass placed one after another. The front sheet had a mirror, and the two other sheets presented different silkscreened views of a chair. If visitors kept quiet in the exhibition space, they would only see their mirror images. But as soon as somebody spoke or made a noise, lights were activated that made the different views of the chair visible.

While Happenings imply a stage situation and were limited to a specific performance time, reactive environments took place within the exhibition situation in galleries and museums.

In 1969, in the first video group exhibition, TV as a Creative Medium, at New York’s Howard Wise Gallery, Participation TV (I & II) by Nam June Paik were another attempt among other reactive-environment projects. In Participation TV I, visitors produced sounds using two microphones, and the effects of the sound waves could subsequently be watched on the monitor. In Participation TV II, three color monitors and three cameras aimed at each other resulted in endless visual feedback. If visitors stepped between a camera and a monitor, their images appeared on the monitor amid the endless visual feedback.

Myron Krueger was another avant-garde pioneer, who took influential steps toward a participatory, spatial, interactive art. His interactive art exhibitions (proposing the responsive environment) can be considered the cornerstone of a type of spatial interactive art in which computer algorithms play a key role: GLOWFLOW (1969, an environmental exhibition in which lines of light glowed in response to the participants’ movements in the exhibition space), METAPLAY (1970, a digital screen on which the live video image of the viewer and a computer graphic image remotely drawn by an artist were superimposed), PSYCHIC SPACE (1971, an environment dominated by a program that automatically responded with electronic sound to the footsteps of people entering the room), and VIDEOPLACE (1975, a video screen allowing participants in separate locations to interact in a common visual experience, in unexpected ways, through the video medium). He also developed the theoretical framework for what he and others had been doing for almost a decade, describing the responsive environment as a form of art:

“The [responsive] environments described suggest a new art medium based on a commitment to real-time interaction between men and machines. The medium is comprised of sensing, display and control systems. It accepts inputs from or about the participant and then outputs in a way he can recognize as corresponding to his behavior. The relationship between inputs and outputs is arbitrary and variable, allowing the artist to intervene between the participant’s action and the results perceived.”

Krueger believed that the audience of a responsive environment has to be actively involved in shaping its surroundings. The participant is equipped to express himself in new ways through the new performative affordances given to his limbs. He does not simply admire the work of art; instead, he has to deal with the moment on its own terms and consequently co-create a unique spatio-temporal experience.
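Krueger’s description maps neatly onto code: sensed input from a participant, an arbitrary and variable input-output relationship the artist can intervene in, and an output the participant recognizes as corresponding to their behavior. The mapping functions and class names below are invented for illustration, a sketch of the idea rather than any of Krueger’s actual systems:

```python
# A toy "responsive environment" in Krueger's sense: input from a
# participant, an artist-defined (and swappable) mapping, and an output
# the participant can recognize as corresponding to their behavior.

def mirror(position):
    """Echo the participant's position directly."""
    return position

def invert(position):
    """An arbitrary alternative mapping the artist may swap in."""
    x, y = position
    return (-x, -y)

class ResponsiveEnvironment:
    def __init__(self, mapping):
        self.mapping = mapping          # the artist's intervention point

    def step(self, sensed_position):
        return self.mapping(sensed_position)

env = ResponsiveEnvironment(mirror)
print(env.step((2, 3)))   # (2, 3)
env.mapping = invert      # the artist changes the relationship mid-show
print(env.step((2, 3)))   # (-2, -3)
```

The point of the sketch is that the relationship between input and output is “arbitrary and variable”: the artist intervenes by replacing the mapping, not by scripting the participant.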

Krueger’s Sketches

The major difference between the interactive art tradition (the reactive or responsive environment) and most happenings with a theatrical, participatory theme was the leadership of the artist in the participatory art forms versus the leadership of events (or machinery) in the interactive art projects. Söke Dinkla pointed to a socio-political layer in this form of art:

“The artistic material of interactive art is the automatized dialogue between program and user. Interactive artworks provide a critical analysis of the automatized communication that is replacing inter-human relationships in more and more social fields. Thus the distribution of power between user and system is not just a technological issue but a social and political one as well.”

Interaction in the context of Space and Environment

Speaking of space and cybernetics indeed requires mentioning a landmark project by Cedric Price in the early 1960s called the “Fun Palace.” In addition to incorporating basic laws of cybernetics, Price created a unique synthesis of a wide range of contemporary discourses of his time, such as information technology, game theory, and Situationism, to produce a new kind of “improvisational” architecture. The Fun Palace began as a collaboration between Price, an architect who valued the “inevitability of change, chance, and indeterminacy” of a human environment, and the avant-garde theater producer Joan Littlewood, who dreamt of a kind of theater where people could experience its “transcendence and transformation” not as audience but as players. The Fun Palace would have no singular program and could adapt its form to the “ever-changing and unpredictable,” ad-hoc program that would be determined by its users. In the Fun Palace, in contrast to the conventional practice of architecture, the architect stated problems in terms of permissivity, that is, in terms of events rather than of objects.

A sketch of the Fun Palace project
Fun Palace by Cedric Price

When the approach of the Fun Palace gradually shifted from theatrical ideas toward cybernetics, the project planners placed more importance on mathematical models based on statistics, psychology, and sociology. Later on, Gordon Pask joined the project as the head of its cybernetics committee. Price even hoped that computer programs would relocate the movable walls and walkways to adapt the layout of the Fun Palace to changes in use. The Fun Palace was never completed. Though unbuilt, it was widely admired and imitated, especially by the young architecture students who formed the core of the avant-garde Archigram group.

The Illustration by Archigram called: Plug-in City
Plug-in city by Archigram

Archigram was a magazine, backed by a group of architects and designers, published in nine issues during the 1960s. The name is a hybrid of “architecture” and “telegram.” Each issue was likewise a hybrid, crossing between structure and communication. The magazine is now considered a reaction to the emergence of “electronically driven technologies within the popular domain of consumer products and services.” Archigram provided images ranging from system design to cybernetic planning.

The illustration by Archigram titled: The Walking City
Walking city by Archigram

Their most notable works included Plug-in City and Walking City. In Plug-in City, Peter Cook proposed a city consisting of a permanent infrastructure and circulation network, with temporary spaces and services that could be added to or removed from it. The proposal addressed urban problems such as population growth, traffic, and land use by considering the whole city as a system. Ron Herron’s Walking City consists of giant walking structures, potentially for a post-nuclear-war human settlement. These structures would be able to connect to one another, or to a network of circulation infrastructure, in order to exchange passengers/dwellers and goods.

Mark Weiser coined the term “Ubiquitous Computing” in 1988. Using the example of writing, the first information technology, which stores spoken language for the long run, he described how “literacy technology” products have a constant presence in the background. While they do not require active attention, “the information to be transmitted is ready for use at a glance.”

Weiser recognized the silicon-based technologies of the time as far from this concept. He proposed that ubiquitous computers run constantly, invisible and non-intrusive, in the background of everyday life, woven into its fabric. The crux of the concept is that with network-connected devices, information will be available everywhere: people do not put information on their devices; instead, their devices are put on a network of information. He emphasizes that the power of the concept comes not from any one of these devices, but from the intersection of many of them.

Ubiquitous computing takes the social layer of human environments into account. Later on emerged the design of embedded (as opposed to merely portable), location-aware, situated (as opposed to universal), and adapted (as opposed to uniform) systems.

Malcolm McCullough, in his book Digital Ground, states:

“When most objects boot up and link to networks, designers have to understand the landscape of technology well enough to take a position about their design.”

A major contribution of ubiquitous computing was the change it introduced to computer interfaces. Malcolm McCullough suggests that ubiquitous computing is far from a portable or mobile form of computing, since it is embedded in the spaces we live in. He advocates a new pervasive, location-aware computing to replace the existing desktop computer. This new computing “emerge[s] on the assumption that what you need, and with whom you wish to be connected at the moment, is based on where you are.”

McCullough also proposes the elements through which this new form of computing can be made possible. These elements consist of microprocessors, sensors for detecting the action, communication links between the devices, tags to identify the actors, and actuators to close the feedback loop. He also suggests controllers, displayers, location tracking devices, and software components to complete the set of components needed for pervasive computing.
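McCullough’s component list can be read as a simple pipeline. The classes below are hypothetical illustrations of how a sensor, a controller, and an actuator close a feedback loop, not code from Digital Ground:

```python
# McCullough's elements as a toy pipeline: a sensor detects action,
# a controller decides, and an actuator closes the feedback loop.

class MotionSensor:
    def read(self, environment):
        return environment["motion"]            # detect the action

class LightActuator:
    def act(self, environment, on):
        environment["light"] = on               # close the feedback loop

class Controller:
    def __init__(self, sensor, actuator):
        self.sensor, self.actuator = sensor, actuator

    def step(self, environment):
        motion = self.sensor.read(environment)  # sense
        self.actuator.act(environment, motion)  # respond
        return environment

room = {"motion": True, "light": False}
Controller(MotionSensor(), LightActuator()).step(room)
print(room["light"])  # True: the light responds to the detected motion
```

Tags, communication links, displays, and location tracking would slot into the same loop; the design choice is that each element stays replaceable, which is what makes the environment, rather than any single device, the computer.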

In the 1970s, Nicholas Negroponte spoke of various aspects of the emerging discourse of designer-machine dialogue and its several byproducts that had appeared during the late 1960s and early 1970s in architecture and urbanism, such as “flexible,” “adaptive,” “reactive,” “responsive,” and “manipulative” [styles or approaches to architecture]. His project SEEK was a manifesto exhibition/installation that initiated the notion of digital assemblage in architecture. He drew a boundary between two types of interaction: one passive and “manipulative,” which is “moved as opposed to move,” and, in contrast, one responsive, in which the environment takes an active role as the result of a computational process. Negroponte went far beyond the simple feedback loop of what is conventionally known as a control system. His responsive architecture moves toward artificial intelligence in the sense that it has intentions and contextualized cognition, with the capability of dynamically changing its goals. In his book Soft Architecture Machines, Negroponte proposes a model of architecture without architects. He puts architecture machines beyond mere aids in the process of designing buildings. Instead, in his view, they serve as buildings themselves: intelligent machines, or cognitive physical environments, that respond to their inhabitants’ immediate needs and wishes.

SEEK installation at MIT architecture machine group

Microcomputers and Democratization of Interaction Design

Making interactive [art] projects, prototyping technological products, and building embedded systems such as those McCullough suggested used to require hardcore electronics and engineering skills. To utilize even the simplest technologies, such as a simple control mechanism, a sensor, or an electric motor, artists and designers either had to buy a consumer version (if available) that let them control the system in the desired way, hire an engineer, or invest the time and money to learn the skills required to research and develop a solution themselves.

This barrier, however, was overcome in two steps in the first decade of the twenty-first century. The first was in 2001, when Processing, an open-source programming language and integrated development environment (IDE), was released for the electronic art, new media art, and visual design communities. The second was in 2005, when the Arduino, an open-source electronics platform (microcontroller) developed at the Interaction Design Institute Ivrea in Italy, came to market with the goal of creating a low-cost, simple platform for non-engineers and, at the time, for art students who wanted to create interactive electronic art projects.

The cover of a documentary film titled: Arduino The Documentary
watch the documentary

The Arduino soon became a tool for artists and designers and found its way into art museums and galleries. Its growing popularity, both in the mainstream and in museums, reveals that artists and designers embrace this new potential as a tool for their art projects. Processing and Arduino, individually and in combination, have boosted the path of interactive art and design and, to a large extent, architecture and urbanism. The two platforms initiated a path that was followed and supported by a number of similar hardware/software platforms, such as Raspberry Pi boards, Intel Galileo boards, BeagleBoards, openFrameworks, and Pure Data.

Statistics from search-engine queries show significant interest in these platforms. In 2009, the term “Arduino” was found on 1.9 million websites. The Boolean query “Arduino and design” pulled up 613,000 sites, and “Arduino and art” 603,000.

These platforms are used by many user groups simultaneously, resulting in the overlap of several areas of study. The possibilities of these new tools and platforms, which had traditionally been in the hands of engineers and computer programmers, are now accessible to artists, interaction designers, educators, and others. These various groups are constantly working together and sharing their code, materials, and techniques. The byproduct of these revolutionary products democratized the interaction design tradition and opened up a new and accessible field for artists and designers.

Interaction And Experience Design Are Similar But Different

The key is to distinguish between the roots of interaction (interaction design and interactive art) and those of user experience. Although there are many similarities between the two notions, the main difference is that experience design has been a topic for a long time, while interaction design is a less-than-a-century-old concept.

The notion of interaction design was shaped by various artists and computer scientists; it can be seen as the entanglement of computer science and art. User experience, however, has been an important topic throughout the history of modern architecture and industrial design. User experience puts users at the center and focuses on ways to solve their problems, while interaction design focuses on questioning the authority of the creator/designer and emphasizes the significance of systems.

Finally, in 1995, Don Norman coined the term “User Experience,” following the activities that companies like Toyota and Apple Computer were engaged in, and the ideas of scholars like Henry Dreyfuss.

And UX

Nowadays, with the rise of digital products, “user experience design” has emerged specifically in the form of “UX design.” UX is a controversial topic in today’s design landscape. Many assume interaction design is a part of UX design. However, these are mostly the popular opinions of well-known product/UX designers or design agencies, regardless of the actual history of interaction design and its path of development (see IDF, for instance).

I believe UX design is the form of experience design that addresses the digitality of a given product. Interaction design was born almost entirely through digital technology and has little meaning without it. For me, when people use the term UX design, they aim to emphasize the interaction design aspect of the user/customer experience (user/customer experience from a human-computer interaction perspective). User experience does not necessarily require the use of [digital] technology; in interaction design, however, technology is more than a key player. For example, while designing a hammer, user experience is still a valid concern. On the flip side, except for some very early examples of interactive art, it is the presence of digital technology that gives meaning to the notion of interaction itself. One might say that there is always some sort of [digital] interaction design when people think of UX, while designers often use the term “user experience design” in a more general and inclusive way, about any given product regardless of its technological properties.

References:

Norbert Wiener and R. B. Lindsay, “Cybernetics,” American Journal of Physics 17, no. 4 (1949).

Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society (Garden City, N.Y.: Doubleday, 1954).

Stafford Beer, Designing Freedom (Toronto: Canadian Broadcasting Corp., 1974).

W Ross Ashby, “Requisite Variety and Its Implications for the Control of Complex Systems,” in Facets of Systems Science (Springer, 1991).

Söke Dinkla, “From Participation to Interaction: Toward the Origins of Interactive Art,” Clicking in: Hot links to a digital culture (1996).

Dore Ashton, “An Interview with Marcel Duchamp,” Studio International 171, no. 878 (1966).

Philip Beesley and Omar Khan, Responsive Architecture/Performing Instruments (Architectural League of New York, 2009); Mathews.

Hadas A Steiner, Beyond Archigram: The Structure of Circulation (Routledge, 2013).

Michael Kirby, Happenings: An Illustrated Anthology (EP Dutton, 1965).

Joseph CR Licklider, “Man-Computer Symbiosis,” IRE transactions on human factors in electronics, no. 1 (1960).

Nicolas Schöffer, La Ville Cybernétique (Tchou Paris, 1969).

All Davis et al., “Art and the Future” (paper presented at the MIT Artificial Intelligence Laboratory Annual Abstract, 1973).

Billy Klüver and Julie Martin, “Four Difficult Pieces,” (1991).

Myron W Krueger, “Responsive Environments” (paper presented at the Proceedings of the June 13–16, 1977, national computer conference, 1977).

Gary Masters, “History of Computers: Courtesy of Microsoft Encarta,” https://www.utdallas.edu/~ivor/cs1315/history.html.

Mark Weiser, “The Computer for the 21st Century,” Scientific american 265, no. 3 (1991).

”Ubiquitous Computing” (paper presented at the ACM Conference on Computer Science, 1994).

Donald A Norman, The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution (MIT press, 1998).

Malcolm McCullough, Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing (The MIT Press, 2005).

Adam Greenfield and Mark Shepard, Urban Computing and Its Discontents (Architectural League of New York, 2007).

