The 401st Blow :: Thoughts On Media

The Evolution Of Interaction

Posted in Technology, Theory by Noah Harlan on January 30, 2010

I guess today is my day for following up on earlier posts… Last week I posted about claim chowder, and in particular the following assertion by PC World’s Bill Snyder about the impending Apple tablet announcement:

[If] you run a small business and want to avoid wasting money and brain cells on superfluous technology, forget about the iSlate or whatever Apple is going to call its tablet computing device. It’s going to be too expensive, it does things you don’t need to do, and it will add a messy layer of complication to your company’s computing infrastructure.

Sure, the tablet we expect Apple to launch on January 27 will probably have more than its share of cool factor. But do you want to spend $1,000 or so for bragging rights? For that price, you could buy two perfectly serviceable Windows netbooks, four iPhones, or–if you want to go the Apple route–cover most of the cost of a 13-inch MacBook Pro, getting proven technology that’s useful right out of the box.

So, was it more claim or more chowder?

Now everyone under the sun has chimed in with their thoughts on the device, but indulge me and allow me to share a few of my own about why it is more important than you may think.

The iPad changes the way we relate to media and content.

The ENIAC - 1946

In the earliest days of computing we related to technology, and to the media contained in that technology (which is why we call it ‘content’, remember), symbolically. Toggle switches would be flipped, lights would turn on and off, and a code would be returned to us to decipher. This was, in a way, much like the ancient abacus. It had the potential to process certain linear tasks more efficiently than our own minds, and it pointed to the possibility of processing power extending our abilities, proving out our theories, and making tangible our imagination. We gained greater efficiency within this structure – cards containing series of input commands replaced step-by-step manual manipulation, and early printers could print results that could be studied later – but we remained at a distance from the machine and from the content. We had to adopt the language of the machine, even if it was a language we created.

By the early 1960s, scientists had merged the teletype technology of the time with cathode ray tube displays and used them as an input/output system for computing. This moved our interaction with computers from the primitively symbolic to the linguistic. We could enter commands in language and the computer could send responses in language. We moved into a dialogic relationship with our technology: we could describe what we wanted to have happen and the computer, in response, could describe what happened. But it lacked the ability to represent, unless what it was representing was language. That was fine, even amazing, except that it separated the computer from whole parts of the human experience. It was a device for the sciences. Explain a problem and it will explain the answer.

Then, in the 1970s, researchers began experimenting with new ways to represent information and control our interaction with computers. Out of this research emerged the general-purpose Graphical User Interface (GUI), created at the legendary Xerox PARC laboratory. (Yes, for those of you who don’t know the history of computing, it’s that Xerox company. Xerox was also responsible for the WYSIWYG editor, the bitmapped display, object-oriented programming via Smalltalk, and Ethernet, and it turned Douglas Engelbart’s mouse into a practical device. But yeah, the copier company…) In 1981, Xerox released the Star (aka the Xerox 8010 Information System), a computer whose primary form of interaction was a non-textual, visual representation of content that you manipulated manually, by pointing and clicking. It was this system that a young Steve Jobs and Steve Wozniak saw at the Xerox PARC labs, and it took them from the early Apple computers to the launch of the single most groundbreaking machine in the history of personal computing. The Macintosh.

The Macintosh

The Mac became wildly successful and was the blueprint for Microsoft’s Windows operating system. Over the following two and a half decades, we related to our content primarily in a visual and mechanical way. However, we remained abstracted from our content and from the information contained in our devices. The mouse became second nature, but it was always second nature. Think of all the times you struggled: the mouse pad, the mouseball jammed with lint, the cord snagged on the keyboard, the button that wouldn’t click, or clicked too easily, the line you were dragging that didn’t land precisely where you wanted it. These frustrations were all mechanical in nature and they kept us at a distance from our content. We came up with solutions to each (the mouseball disappeared with optical tracking, the cord vanished with trackpads and Bluetooth) but we remained abstracted.

The first commercial touchscreens emerged in the early 1980s. These primitive devices relied either on infrared grids positioned around the edges of the screen (not truly a touchscreen, since the sensing device was not part of the screen itself) or on resistive touchscreens, which were based on pressure-sensitive layers: when you touch a spot on the screen, a slight gap between two layers closes and the screen registers the touch. Both of these systems are, again, mechanical, as one requires you to physically block beams of light and the other requires you to physically close a circuit. But then came capacitive touchscreens.

Capacitive touchscreens carry a small electrical charge across their surface. When your finger comes into contact with that surface, the electrical conductivity of your body forms a capacitor with the screen, and the device can sense and respond to that. You are no longer mechanically manipulating anything; you are gesturing.
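A rough way to see why this works: capacitance scales as C ≈ εA/d – permittivity times overlapping area, divided by the gap between the conductors. Your fingertip and the electrode grid under the glass form exactly that structure, so the moment you touch down, the charge stored at that spot changes and the controller can locate you. No pressure, no moving parts, nothing to physically push closed.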

This move to the gestural changes how we interact with content. The last physical bridges between us and the content in our devices begin to crumble. Gestural control is intuitive, it is fluid, and it no longer needs interpretation. I pinch and it shrinks. I drag and it follows. I tap and it zooms. (There is a short sketch of this mapping in code after the observations below.) I remember standing in the Apple store in SoHo after the iPhone was first launched and watching people play with the device. It was a fascinating experience and I made two observations:

1) People who walked in off the street and started playing with it smiled. And I mean smiled immediately. The device amazed them. It was fun to make gestures and to see a computer respond. To make gestures and watch your content respond. When I was in elementary school my parents made my brother take a typing class. He would go to a teacher, then he would come home, put blank stickers on his typewriter keys and practice touch-typing until he had mastered the skill. In order to have the most basic interaction with a computer he had to learn a new way of communicating, and he then had to channel his thoughts into that communication methodology. With the gestural computing of the iPhone (and a big part of this is the iPhone OS: Apple figured out how to build a user interface to match the technology) you no longer needed to be taught, because you already knew.

2) It was the first technology device I have ever seen that seemed to equally amuse and entertain men and women. Men thought it was cool and women thought it was cute. Why? Because it was comfortable for both. We do think differently, we view the world differently and we interact with the world differently. But we all have innate knowledge, reinforced by an entire lifetime of experience, of how to gesture. Yesterday I visited a friend’s house. His one-year-old daughter was in his arms when he answered the door and, when she saw me, she reached out her hand and pressed on my nose. I laughed, she laughed, and then she pressed again. She had learned a gesture. She won’t learn to type for years. Which way of communicating will be more intuitive to her…
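Here, to make that pinch/drag/tap mapping concrete, is a minimal sketch of how the idea looks in code on Apple’s platforms. The gesture recognizers are UIKit’s own; the PhotoViewController, its photoView, and the image name are hypothetical stand-ins for any piece of content, and this is an illustration of the principle, not a claim about how Apple’s apps are built.

```swift
import UIKit

// A minimal sketch of gesture-driven direct manipulation.
// The three gesture recognizers are real UIKit classes; this
// controller and its photoView are hypothetical examples.
class PhotoViewController: UIViewController {
    let photoView = UIImageView(image: UIImage(named: "photo")) // placeholder asset

    override func viewDidLoad() {
        super.viewDidLoad()
        photoView.isUserInteractionEnabled = true // UIImageView ignores touches by default
        photoView.frame = view.bounds.insetBy(dx: 40, dy: 40)
        view.addSubview(photoView)

        // "I pinch and it shrinks."
        photoView.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(pinched(_:))))
        // "I drag and it follows."
        photoView.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(panned(_:))))
        // "I tap and it zooms."
        photoView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(tapped(_:))))
    }

    @objc func pinched(_ gesture: UIPinchGestureRecognizer) {
        guard let content = gesture.view else { return }
        // Scale the content by exactly the amount the fingers moved.
        content.transform = content.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1 // reset so each callback applies only the new delta
    }

    @objc func panned(_ gesture: UIPanGestureRecognizer) {
        guard let content = gesture.view else { return }
        // Move the content under the finger, one-to-one.
        let delta = gesture.translation(in: view)
        content.center = CGPoint(x: content.center.x + delta.x,
                                 y: content.center.y + delta.y)
        gesture.setTranslation(.zero, in: view)
    }

    @objc func tapped(_ gesture: UITapGestureRecognizer) {
        guard let content = gesture.view else { return }
        // Zoom in on a tap, animated so the response feels physical.
        UIView.animate(withDuration: 0.3) {
            content.transform = content.transform.scaledBy(x: 1.5, y: 1.5)
        }
    }
}
```

Notice how little translation sits between intention and result: each handler takes the gesture’s own geometry (its scale, its translation) and applies it directly to the content. That is the collapse of proxy instruments in miniature.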

The iPad - Gestural Computing

The iPad is the gestural idea brought to the next step. It is large enough that we don’t feel constrained and limited. When using an iPhone you have a perpetual sense that you’re watching part of something larger (the rest of the email is off the screen, the rest of the web site is off the screen), and the device must, at its core, serve as a phone, so its operating system gives primacy to its phone features. The iPad breaks down those limitations and opens up more surface for our content and more space for our gestures. We can now interact with all our content through direct motion and movement, no longer moving proxy instruments like keys and mice. This proximity to our media makes it more personal and frees our methods of expression.

I played music for a long time and studied jazz through high school and into college. I listened to interviews with great jazz musicians, and they would talk about practicing and practicing until the instrument became instinctive. Until they no longer thought about how to play the note they wanted but, rather, thought of the note and the sound came out of the instrument. Musicians at the top of their abilities talk about the instrument becoming part of their body and, when improvising, reaching the point where they can turn off their brain and just connect their soul to their hands. Few people ever have the talent, and spend the time, to reach that point. The point where they no longer have an idea, think about how to execute it, and then execute it but, instead, simply think and perform. Gestural computing lowers the bar to proficiency. It removes the conscious thought of technique from the act of creation. It opens up a whole new world, and this week we moved one step further towards it.


4 Responses


  1. learning styles said, on January 31, 2010 at 10:42 pm

    Great thoughts – terrific. I would like to see notions of neuro-linguistics integrated/analyzed, or simply the idea that some people are more visual, others more kinesthetic (physically based) and yet others more aural. I am sure people who interact/process/learn more kinesthetically tune into these devices better than, say, aural-leaning people. I personally do not like touch screens and am waiting for intuitive devices! 🙂

  2. Noah said, on January 31, 2010 at 10:46 pm

    I suspect intuitive is next on the horizon. It’s the last stage in the evolution and somewhere Donna Haraway will be very happy…

  3. harriette said, on January 31, 2010 at 10:54 pm

    Noah, cool to think about. I am also curious what you DO think of kinesthetic/visual/aural processing styles wrt all these touch screen devices, if anything? … and/or will evolution select for kinesthetic (/visual?) processing thus? 🙂 (I was once drawn to that history of consciousness program, as I am sure many were…).

    twitter.com/filmworkshops
    http://www.yahrfilms.com

  4. Noah said, on February 1, 2010 at 11:06 am

    I think that, by breaking down the barriers between intention and expression, a device like this has the flexibility to respond to each style in a fashion. If music and sound are how you process the world, you still need a way to receive and generate, and those devices exist on a continuum: Singing -> Wind Instrument -> Piano -> Electric Keyboard -> Computer Interface… No matter what your processing style is, you have to have a physical interface with the world on some level, and if there is anything mechanical at the point of interface then this type of gestural computing reduces the barrier to communication. I suspect there will be fascinating apps for children with autism that could help them express and communicate in ways that are denied to them by traditional (spoken and written) language.

