Design in the physical and digital worlds — a brief history

Andrew Robinson
Published in UX Collective
13 min read · Aug 2, 2021


3D Whatsapp logo, a green speech bubble with a white telephone silhouette
Photo by Alexander Shatov on Unsplash

Everything created by humans had to be designed. Design is in the tools we use and the spaces in which we spend our time. These days User Experience Design is most commonly associated with digital products such as apps and websites, but UX began its modern history in the physical realm.

Consider the telephone, an everyday device predating our digital age. The way a user interacts with a phone has changed greatly over the years, and improvements to its design took place long before the existence of smartphones. In 1927 Western Electric designed the first phone (the Model A1) to combine the transmitter and receiver into a single handset. This allowed users to hold the phone with only one hand, freeing up the other for taking notes or accomplishing some other task. This design was so influential that it became ubiquitous, and even today, as we have moved towards smartphones that can be carried in our pockets, it lives on in the phone icon, universal in its shape across all mobile operating systems and websites.

Interestingly, this highlights a bridge between the design of physical objects and their digital counterparts, a bridge that extends far beyond iconography. Often the design of physical products is intricately connected to that of digital ones. Consider how people interact with modern computers. Although touch screens and even voice recognition technologies are gaining in popularity, most work with personal computers is still done using a keyboard and a mouse (or trackpad). These physical objects, or hardware, are directly connected to the digital objects, or software, of the computer being used. Each technology has its own design history, built on previous devices and programs that have been continually improved into what they are today. As touchscreen smartphones have become the norm, we have even witnessed the transformation of the keyboard from physical to digital, with touchscreen keyboards now present on all smartphones.
I would like to take a look at the historical development of several products, two physical and two digital, whose improvement over the years has been largely interdependent, and greatly impacted the way we store, view, and interact with information in our digital age.

The QWERTY Keyboard

Computers were not the first devices to make use of a keyboard. To get to the beginning, we need to look at an earlier device just as groundbreaking in its day, a quantum leap in user experience when it came to printing text on paper: the typewriter. Although the idea of printing text on paper using movable type was certainly not new (it had been done ever since the invention of the printing press in the 1400s), the equipment to do so was too expensive for the average user to afford. Bringing the ability to quickly type documents to the masses required significant innovation and a lengthy design process.

A Hansen Writing Ball
Hansen Writing Ball — Eremeev, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

During the 1800s numerous people worked independently on the development of what we know today as the typewriter. Early notable examples of typing devices included the 1865 Hansen Writing Ball, a half-spherical device featuring keys on the ends of metal rods that radiated outwards from the base. This fascinating machine even remained in use into the early 1900s in Europe; however, its design made it difficult for users to type very quickly. The first commercially successful typewriter was designed by American inventor Christopher Latham Sholes in 1867. Historians disagree on the exact origins of the now ubiquitous QWERTY keyboard layout, but it was ostensibly chosen to solve design problems, whether technical or otherwise. In The Design of Everyday Things, Don Norman highlights the theory that the layout was selected to reduce mechanical failure in early typewriters: the type-bars of letters frequently typed in immediate succession sat close together and would jam, so those letters were spread apart. He also mentions an unconfirmed legend according to which a businessman rearranged the letters so that the word “typewriter” could be typed using only keys located on the second row, which would have been convenient for a salesman demonstrating the device’s efficiency to prospective buyers. In either case, the decision was made to address specific needs, whether to solve usability problems (the machines jamming) or to aid in what we would now call marketing.

A Sholes Typewriter (1872)
Sholes Typewriter 1872 — Unknown author, Public domain, via Wikimedia Commons

For whatever reason, this layout became the default following the success of the Remington №1 Typewriter, as competing brands adopted it themselves to avoid friction for users transferring to their machines. Although it has been pointed out that the QWERTY arrangement of keys is not the most efficient once the mechanical considerations that motivated it no longer apply, keeping the same layout is actually an example of good design. As Norman notes, in some cases, “Tradition and custom coupled with the large number of people already used to an existing scheme make change difficult or even impossible.” That said, typewriters did go through many design changes up until around the 1960s. Notable improvements to the original design include the replacement of the foot pedal with a hand-operated carriage return, and the introduction of IBM’s Selectric in 1961, whose golfball-shaped “typeballs” enabled users to quickly change between fonts and type rapidly. The Selectric remains an impressive feat of engineering.

The IBM Selectric Typewriter with a removable metal typeball
IBM Selectric Typewriter steve lodefink, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons

In the end, computers gradually replaced typewriters, as modern word processing programs offer users a faster, more efficient way to compose and edit text, though the legacy of the typewriter lives on in the keyboard layouts we still use today. The merger of the typewriter and the computer was a design process in and of itself, and it marked a significant step forward in user-centric design for personal computers.

The PC

Today many computers rely on keyboards to facilitate user interaction, but this was not always the case. In the beginning, computers were behemoths, taking up entire rooms. ENIAC, often considered the first general-purpose electronic computer, was unveiled in 1946. It produced output by punching holes into index-card-sized punch cards, which then had to be taken to a card reader for analysis.

In the 1960s, M.I.T., Bell Labs, and General Electric developed Multics, a time-sharing operating system that greatly improved user experience by supporting video display terminals which could show text as it was typed, eliminating the need for punch cards and card readers.

Left: ENIAC, Right: Altair Computer System with switches
Left: ENIAC — Unknown author, Public domain, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Eniac.jpg Right: Altair Computer System with switches: http://dunfield.classiccmp.org/s100/h/a8800.jpg

The mid-1970s saw the first small PCs for consumer use, including the Altair 8800 and other S-100 bus systems. The main improvement here was the dramatic decrease in the size of the machines, as well as a price that, while still high, was affordable enough for the consumer market. Although these computers allowed users to enter data via a front panel of toggle switches, this interface was cumbersome and far less user-friendly than a full keyboard. To remedy this, electric typewriters could be converted for use with the computer, marking the beginning of the merger of the typewriter and the computer keyboard. By the late 1970s, companies like Apple, Radio Shack, and Commodore began shipping keyboards with their computers, setting users’ expectations that computers should come with electric, typewriter-style keyboards. Keyboard technology continued to improve into the 1990s as manufacturers began replacing mechanical key switches with new membrane switches, which were quieter, weighed less, and were ideal for laptops.

Left: The Apple II computer, Right: an Apple II with an external modem
Left: Apple II — FozzTexx, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons Right: Apple II with external modem — User Maury Markowitz on en.wikipedia, Copyrighted free use, via Wikimedia Commons

The computer has come a long way from its mammoth beginnings, and now we can choose from a variety of laptops, tablets, and even smartphones. But the keyboard has remained, a remnant from the days of the typewriter preserved to let users transition easily to new products, surviving even on touchscreen devices that have done away with physical keys entirely.

The iPhone keyboard displayed in the Notes app.
An iPhone Keyboard in Apple Notes

Operating System — CLI to GUI

Naturally computers are about much more than hardware. Just as improvements in the design of their physical bodies have allowed users to better interact with them, so too have improvements in the design of their software. Two of the most important digital products that exist today are the operating system and the web browser.

In the beginning, computers didn’t have operating systems at all. The early electronic computers of the 1940s had to be programmed one bit at a time by means of mechanical switches. By the 50s, computers could still only execute one task at a time, but could read pre-written programs. Users had full control of the machines but had to feed their program data, usually on punched paper or tape, directly into the machine. The computer would start the program and stop when it had either completed or crashed. Soon users were provided with libraries of support code that could be linked to their programs to extend functionality.

An IBM punched card
IBM punched card — Pete Birkinshaw from Manchester, UK, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons

These humble beginnings were the start of modern operating systems. Users at this point were compelled to write all the code themselves, which required a level of technical knowledge far beyond the average person.

The operating system saw its first major improvement to user experience in the late 1970s with the introduction of disk operating systems (DOS) and the command-line interface (CLI). These interfaces displayed text on a screen and allowed users to type commands via a keyboard, with the results readable immediately on the display. The UNIX operating system, developed at Bell Labs starting in 1969 and released in the 1970s, was an early multi-tasking, multi-user operating system; today’s Linux, macOS, Android, iOS, and Chrome OS operating systems are all its descendants. The CLI was the most common way for users to communicate with computers throughout the 70s and 80s, and was used with many early operating systems such as UNIX, MS-DOS, and Apple DOS. The main advantage of this model was that users no longer had to write programs and manually enter them into the computer, but could instead draw on a library of existing commands.

Bourne shell Interaction on Version 7 Unix. A Terminal window with white text on a black screen.
Bourne shell Interaction on Version 7 Unix — Huihermit CC0, via Wikimedia Commons — https://commons.wikimedia.org/wiki/File:Version_7_UNIX_SIMH_PDP11_Kernels_Shell.png

The drawback, however, was that the CLI still wasn’t very user-friendly. Users had to familiarise themselves with a veritable dictionary of commands and options, which proved a significant barrier to entry for many. But the next great improvement in the UX design of computers was coming. Just as the keyboard, a physical object with its origins in the typewriter, had facilitated an improved digital user experience with the CLI, the mouse would do the same, ushering in perhaps the most significant improvement in the history of the operating system: the graphical user interface, or GUI.

In 1973, Xerox PARC built the Xerox Alto, the first computer to use a GUI as its main interface. Building on previous work by researchers led by Douglas Engelbart, who developed the mouse in the 1960s, this groundbreaking machine introduced graphical elements still used across modern desktop operating systems to this day, including menus, radio buttons, and check boxes, though it never reached commercial production.

Over the next several years, development of GUIs continued and the first commercially available models were released, including the PERQ workstation and the Xerox Star. The Apple Lisa, released in 1983, introduced the concept of the menu bar and window controls. All of these features would survive to be incorporated into modern operating systems, including Microsoft Windows, which would become the most popular desktop operating system and still enjoys a market share of around 76% today.

Left: Xerox Star Promotional Poster showing the Xerox Star personal computer, a graph can be seen in the open window on the desktop. The text reads: Now you can create documents with words and pictures. Right: a Xerox Alto Mouse.
Left: Xerox Star Promotional Poster via Wikimedia Commons https://en.wikipedia.org/wiki/Xerox_Star#/media/File:Rank_Xerox_8010+40_brochure_front.jpg Right: Xerox Alto Mouse — Judson McCranie, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons

The GUI was a massive leap forward for user experience design in the field of computing. Allowing users to see and select files on screen with a pointing device (the mouse), while displaying available commands as items in drop-down menus, was an intuitive solution that let users interact with computers without knowing the commands a command-line interface requires. These decisions are rooted in design fundamentals. The way information is stored in computers is closely connected with the principles of information architecture, which themselves are rooted in library science. The modern GUI presents users with a graphical, digital representation of real-world physical office objects: the “desktop” and “file” are named for their analogue counterparts, and documents are usually represented by icons resembling a printed page. Icons almost always correspond to their physical counterparts. By building on users’ preexisting knowledge, these design concepts facilitate a smooth transition from the physical office space to the digital one.

A graphical user interface with icons and windows (GEM 1.1 Desktop). Various file icons drawn with simple black lines are shown inside white windows on a blue desktop background.
A graphical user interface with icons and windows (GEM 1.1 Desktop): https://upload.wikimedia.org/wikipedia/commons/6/6b/Gem_11_Desktop.png

Age of Internet — The Web Browser

As great improvements were being made to computers, the idea that they could be connected in a communication network was of particular interest to governments and universities. In 1969 the first message was sent over ARPANET, a networking project linking research universities in the United States. Over the next 20 years, new networks were established, but the internet remained accessible mainly to researchers, students, and private corporations.

In 1990 the first web server and graphical web browser, WorldWideWeb, was created by Tim Berners-Lee at CERN, the European Organization for Nuclear Research, in Switzerland. In 1993 the first widely popular web browser, Mosaic, was created by Marc Andreessen at the University of Illinois Urbana-Champaign. It ran on Windows computers, meaning that anyone with a PC could access the internet.

Left: WorldWideWeb — Tim Berners-Lee for CERN, the original browser, gray with white text boxes and a panel on the right side with various icons. Right: Netscape Navigator, showing a primitive web page.
Left: WorldWideWeb — Tim Berners-Lee for CERN, Public domain, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:WorldWideWeb_FSF_GNU.png Right: Netscape Navigator — https://en.wikipedia.org/wiki/Netscape_Navigator#/media/File:Navigator_1-22.png

In 1994, the Netscape Navigator browser was released to the public, becoming wildly successful. By 1995 Microsoft had released Internet Explorer and begun competing with Netscape to introduce new technologies. When Microsoft began bundling IE with its Windows operating system, it quickly gained market share, peaking at roughly 95% in the early 2000s. Netscape spun off the not-for-profit Mozilla organization, which released Firefox, an open-source alternative to IE, in the early 2000s. During the 2000s, other companies released their own web browsers, notably Apple with Safari (2003) and Google with Chrome (2008), which is now the most popular web browser in the world.

The Wikipedia homepage resized from full screen (left) to mobile-size (right). The text wraps so that mobile version displays a narrower column.
The Wikipedia homepage resized from full screen (left) to mobile-size (right) Wikipedia, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons

One of the greatest improvements to user experience in the web browser was the move from static web pages to dynamic ones. In the mid-90s, web pages were still static, meaning that user interaction was limited to viewing text and images. All that changed when Netscape created and released the JavaScript programming language, which allowed programs to be written for and run by the web browser. This opened the door for designers and developers to create entirely new and improved web-based user experiences featuring responsive interfaces. With a static web page, any user interaction, such as a button press, required a new page to be loaded. With a dynamic interface, on the other hand, a user could push a button and the browser could display a dialogue box or change the appearance of the page without opening a new one. Users could be alerted when performing actions, and the page could adjust to user actions such as resizing a window, scrolling, or hovering over an object with the cursor.

A JavaScript alert box displaying the text: Are you sure you want to use this function on your website? An unchecked box below the text is labeled: stop executing scripts on this page.
A dialogue box
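To make this concrete, here is a minimal sketch (not taken from any particular site) of the kind of dynamic behaviour described above: JavaScript responding to a button press by updating the page in place, with no new page load. The element ids and function name are hypothetical, and the wiring is guarded so the snippet only touches the page when it actually runs in a browser.

```javascript
// Pure logic: decide what the page should display after each click.
function clickMessage(count) {
  return count === 1 ? "Button clicked once" : `Button clicked ${count} times`;
}

// Browser-only wiring: update the page in place when the button is pressed.
// The ids "demo-button" and "demo-label" are assumed to exist in the page.
if (typeof document !== "undefined") {
  let count = 0;
  const button = document.getElementById("demo-button");
  const label = document.getElementById("demo-label");
  button.addEventListener("click", () => {
    count += 1;
    label.textContent = clickMessage(count); // no page reload needed
  });
}
```

On a static page, each press of the button would instead submit a request and force the browser to fetch and render an entirely new page.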

Probably the most significant change in web browsing in recent history has been the rise of the search engine. Today it is hard to imagine the internet without the ability to conduct a quick Google search. Google wasn’t the first search engine that allowed users to search the web, but it beat out earlier competitors with its PageRank system, which returned more relevant results. In general, being able to search an index of existing web pages is a giant leap from only being able to visit known addresses. Modern web browsers allow users to type a query into a search bar, or directly into the URL bar, to run a search with their preferred search engine, putting the information they seek only a few clicks away. The development of web technologies continues at an astonishing rate. Web browsers have gone from being simply a way to access pages to platforms that can run a wide variety of programs, and it seems that the future of computing will be inextricable from that of the web.

A web browser with the Google homepage open. The blue, red, yellow and green Google logo sits above a gray search box on a white screen.
The Google Chrome web browser with Google search — https://en.wikipedia.org/wiki/Google_Chrome#/media/File:Google_Chrome_on_Windows_10_screenshot.png
