An Interview with Computing Pioneer Alan Kay

[image] Kids using Dynabooks, in a drawing from Alan Kay's 1972 paper "A Personal Computer for Children of All Ages"

Born in 1940, computer scientist Alan Curtis Kay is one of a handful of visionaries most responsible for the concepts which have propelled personal computing forward over the past thirty years — and surely the most quotable one.

He’s the man who said that “The best way to predict the future is to invent it” and that “Technology is anything that wasn’t around when you were born” and that “If you don’t fail at least 90 percent of the time, you’re not aiming high enough.” And when I first saw Microsoft’s Surface tablet last June, a Kay maxim helped me understand it: “People who are really serious about software should make their own hardware.”

[image] Alan Kay

Viewpoints Research Institute

Above all, however, Kay is known for the Dynabook — his decades-old vision of a portable suite of hardware, software, programming tools and services which would add up to the ultimate creative environment for kids of all ages. Every modern portable computer reflects elements of the Dynabook concept — the One Laptop Per Child project’s XO above all others — and yet none of them have fully realized the concept which Kay was writing about in the early 1970s.

Actually, Kay says that some gadgets with superficial Dynabook-like qualities, such as the iPad, have not only failed to realize the Dynabook dream, but have in some senses betrayed it. That’s one of the points he makes in this interview, conducted by computer historian David Greelish, proprietor of the Classic Computing Blog and organizer of this month’s Vintage Computer Festival Southeast in Atlanta. (The Festival will include a pop-up Apple museum featuring Xerox’s groundbreaking Alto workstation, which Kay worked on, as well as devices which deeply reflected his influence, including the Lisa, the original Macintosh and the Newton.)

Kay and Greelish also discuss Kay’s experiences at some of the big outfits where he’s worked, including Xerox’s fabled PARC labs, Apple, Disney and HP. Today, Kay continues his research about children and technology at his own organization, the Viewpoints Research Institute.

–Harry McCracken

David Greelish: Do you agree that we now essentially have the Dynabook, as expressed in the three tiers of modern personal computing: the notebook, tablet and smartphone? If not, what critical features do you see missing from these? Have they delivered on the promise of improving education?

Alan Kay: I have been asked versions of this question for the last twenty years or so. Ninety-five percent of the Dynabook idea was a “service conception,” and five percent had to do with physical forms, of which only one — the slim notebook — is generally in the public view. (The other two were an extrapolated version of Ivan Sutherland’s head-mounted display, and an extrapolated version of Nicholas Negroponte’s ideas about ubiquitous computers embedded and networked everywhere.)

[image] Dynabook

Alan Kay

A Dynabook, as depicted in Kay’s 1972 paper

In order to talk about the service idea, I generally just stick with the minimum that had to be delivered (even though a great hope back in the ’60s was that AI would progress enough to allow “helpful agents” — as in [pioneering computer scientist John] McCarthy’s “Advice Taker” — to be a pillar of the user-interface experience). We invented the overlapping-window, icons, etc., graphical user interface at PARC and just concentrated on it when it became clear that the “helpful agent” wasn’t going to show up in the decade of the ’70s (and still hasn’t).

The interesting thing about this question is that it is quite clear from the several early papers that it was an ancillary point for the Dynabook to be able to simulate all existing media in an editable/authorable form, in a highly portable, networked (including wireless) package. The main point was for it to be able to qualitatively extend the notions of “reading, writing, sharing, publishing, etc. of ideas” literacy to include the “computer reading, writing, sharing, publishing of ideas” that is the computer’s special province.

For all media, the original intent was “symmetric authoring and consuming”.

Isn’t it crystal clear that this last and most important service is quite lacking in today’s computing for the general public? Apple with the iPad and iPhone goes even further and does not allow children to download an Etoy made by another child somewhere in the world. This could not be farther from the original intentions of the entire ARPA-IPTO/PARC community in the ’60s and ’70s.

Apple’s reasons for this are mostly bogus, and to the extent that security is an issue, what is insecure are the OSes supplied by the vendors (and the insecurities are the result of their own bad practices — they are not necessary).

Do our modern personal computing devices augment education? Have they lived up to what was foreseen in the past? Are they really helping teachers teach in the classroom?

The perspective on this is first to ask whether the current educational practices are even using books in a powerful and educative way. Or even to ask whether the classroom process without any special media at all is educative.

I would say, to a distressing extent, the answer is “no.”

The education establishment in the U.S. has generally treated the computer (a) first as undesirable, and shunned it; (b) as sort of like a typewriter; (c) as little more than a cheap but less legible textbook with smaller pages; (d) as something for AP testing; and (e) has not ventured into what is special about computing with reference to modeling ideas and helping to think about them.

This in spite of pioneers such as Seymour Papert explaining, both in general and quite specifically, just what it is and how it can revolutionize education.

I’ve used the analogy of what would happen if you put a piano in every classroom. If there is no other context, you will get a “chopsticks” culture, and maybe even a pop culture. And this is pretty much what is happening.

In other words, “the music is not in the piano”.

What do you think about the trend that these devices are becoming purely communication and social tools? What do you see as good or bad about that? Is current technology improving or harming the social skills of children and especially teens? How about adults?

Social thinking requires very exacting thresholds to be powerful. For example, we’ve had social thinking for 200,000 years and hardly anything happened that could be considered progress over most of that time. This is because what is most pervasive about social thinking is “how to get along and mutually cope.” Modern science was only invented 400 years ago, and it is a good example of what social thinking can do with a high threshold. Science requires a society because even people who are trying to be good thinkers love their own thoughts and theories — much of the debugging has to be done by others. But the whole system has to rise above our genetic approaches to being social to much more principled methods in order to make social thinking work.

By contrast, it is not a huge exaggeration to point out that electronic media over the last 100+ years have actually removed some of the day-to-day need for reading and writing, and have allowed much of the civilized world to lapse back into oral societal forms (and this is not a good thing at all for systems that require most of the citizenry to think in modern forms).

For most people, what is going on is quite harmful.

In traditional personal computing (desktops and laptops), the graphical user interface/desktop paradigm has now been well established for 20+ years, having become dominant sometime after the Apple Macintosh, with Microsoft’s Windows 3.1 in 1992. Do you see this changing anytime soon? What might replace it? Or will these types of computers always use this type of interface for the foreseeable future?

The current day UIs derived from the PARC-GUI [the interface developed in the 1970s by Kay and his colleagues at Xerox’s Palo Alto Research Center] have many flaws, including those that were in the PARC-GUI in the first place. In addition, there have been backslidings — for example, even though multitouch is a good idea (pioneered by Nicholas Negroponte’s ARCH-MAC group [a predecessor of MIT’s Media Lab] in the late ’70s), much of the iPad UI is very poor in a myriad of ways.

[image] Xerox Alto

Courtesy Lonnie Mimms

Xerox’s Alto workstation, the 1973 system co-created by Kay that profoundly influenced the Macintosh and Windows

There are some elements of the PARC-style GUI that are likely to stick around even if undergoing a few facelifts. For example, we generally want to view and edit more than one kind of scene at the same time — this could be as simple as combining pictures and text in the same glimpse, or to deal with more than one kind of task, or to compare different perspectives of the same model. Pointing and dragging are likely to stick, because they are simple extensions of hands and fingers. One would hope that “modeless” would stick, though there are many more modes now than in the original PARC and Mac interfaces. “Undo” should stick (for obvious reasons), but it is very weakly present in the iPad, etc.

There is also the QWERTY phenomenon, where a good or bad idea becomes really bad and sticks because it is ingrained in usage. There are many examples of this in today’s interfaces.

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc., and to a lesser extent automobiles). [Douglas] Engelbart’s interface required some learning, but it paid off with speed of giving commands and efficiency in navigation and editing. People objected, and laughed when Doug told them that users of the future would spend many hours a day at their screens and should have extremely efficient UIs they could learn to be skilled in.

[youtube=http://www.youtube.com/watch?v=JfIgzSoTMOs]

[Douglas Engelbart, inventor of the mouse, demonstrates his user interface in 1968]

There is the general desire of people to be change-averse — “people love change except for the change part” — this includes the QWERTY and no-learning-curve ideas.

Part of the motivation for the PARC GUI came from our desire to have a universal display screen which could display anything — this led to the bitmap screen. One drawback of those screens, and of the screens today, is that the visual angle of the display (about 40°) is much narrower than the human visual field (which is about 135° vertically and 160° horizontally for each eye). This is critical because most of the acuity of an eye is in the fovea (~1-2°), but the rest of the retina has some acuity and is very responsive to changes (which cause the eye to swing to bring the fovea onto the change).
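
[For scale, here is a rough back-of-the-envelope check of those numbers. The display width and viewing distance below are assumed for illustration only (they are not figures from the interview), but they show how a typical desktop screen ends up subtending roughly 40 degrees while each eye can take in around 160 degrees horizontally.]

    import math

    def visual_angle_deg(size_m: float, distance_m: float) -> float:
        """Visual angle (in degrees) subtended by an object of a given size at a given viewing distance."""
        return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

    # Assumed, illustrative figures: a 50 cm-wide display viewed from about 70 cm away.
    display_angle = visual_angle_deg(0.50, 0.70)
    print(f"Display spans about {display_angle:.0f} degrees")  # ~39 degrees, close to the ~40 Kay cites
    print(f"That is about {display_angle / 160:.0%} of one eye's ~160-degree horizontal field")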

Head-mounted displays can have extremely wide fields of view, and when these appear (they will resemble lightweight glasses), they will allow a rather different notion of UI — note that huge fields of view through glasses will help both 2-1/2 D and 3D graphics, and the UIs that go along with them.

This suggests many new design ideas for future GUIs, and they will slowly happen.

You were an Apple Fellow [in the 1980s] while John Sculley was CEO and when the video of the Knowledge Navigator was released. How much influence did you have on that set of ideas and the video? How involved were you with the Newton?

John has recounted this in his book and website. I suggest you look at his version. He asked me to come up with “a modern version of the Dynabook” (which was pretty funny, since we still didn’t have a Dynabook). I contributed ideas from a variety of sources, including myself, Negroponte, AI, etc. The production team was really good. Doris Mitch and Hugh Dubberly did the heavy lifting. Michael Markman was the ringmaster (and quite a remarkable person and thinker). We did a few more of these concept videos for John after the success of the KN video.

[youtube=http://www.youtube.com/watch?v=QRH8eimU_20]

[John Sculley’s 1987 “Knowledge Navigator” future-vision video]

I had many grazing encounters with the Newton (this was a very complicated project, with politics on all fronts). Back in the Dynabook design days I had determined pretty carefully that, while you could do a very good character recognizer (the GRAIL project at RAND had one in the ’60s), you still needed a keyboard. Apple Marketing did not want a keyboard because they feared it would then compete with the Mac. Then there was the siren’s song of trying to recognize handwriting rather than printing — and they plunged (this was a terrible decision). And so on and so forth. One of the heroes of the Newton was [PARC and Mac veteran] Larry Tesler, who took over the project at the end and made it happen.

Is the realization of an intelligent software agent or user agent the key to the end of the desktop metaphor in desktop and laptop computing? Artificial intelligence (AI) has not progressed anywhere near as fast over the last 40+ years as many people had thought it would, so at the rate it’s developing, when might we have AI like what the Knowledge Navigator showed? (Like 1966’s Star Trek, even?)

Having an intelligent secretary does not get rid of the need to read, write, and draw, etc. In a well-functioning world, tools and agents are complementary. Most progress in research comes when funding is wise and good. That has not been the case for 30 years or so. AI is a difficult problem, but solvable in important ways. It took 12+ years of funding to create personal computing and pervasive networking, and this only happened because there was a wise and good funder (ARPA-IPTO). If we include commercialization, this took a little more than 20 years (from 1962 to 1984, when the Mac appeared).

It’s important to realize that no one knew how difficult a problem this was, but it was seen as doable and the funder hung in there. It’s likely that “good AI” is a 15-20 year problem at this point. But the only way to find out is to set up a national effort and hang in there with top people.

You were both an Apple Fellow in the Advanced Technology Group at Apple Computer and a Disney Fellow at Walt Disney Imagineering. Can you comment about the similarities and differences in the culture of the two companies?

I’ve been a Fellow in a number of companies: Xerox, Apple, Disney, HP. There are certain similarities because all the Fellows programs were derived from IBM’s, which itself was derived from the MIT “Institute Professor” program. Basically: autonomy, a stipend large enough to start projects without permission, option to be a lone wolf or run a group or be in a group, access to upper management to give advice whether solicited or not, etc.

All public companies are faced with dealing with the market and their stockholders, and the deadly three-month assessment. How they deal with these issues is somewhat different. Also the kind of business a company is in often affects its style (though marketing and finance people are rather similar no matter what a company is doing). The most different of all these companies in its dynamics and style was the “show biz” company Disney, under Michael Eisner during my five years there.

However, Xerox PARC was the most different of all of the experiences, because the research itself there had been protected by [PARC Computer Science Laboratory founder] Bob Taylor especially for the first five years. So this was mostly idyllic and I think we were all the most productive we’d ever been over all of our careers, past, present and future (at least I was). All the other companies — including the rest of Xerox — had much less effective ideas about research and how it should be done and who should do it.

I should say that I had always loved [Disney theme-park design and engineering organization] Imagineering and the Imagineers, and had known a number of them over the years, as well as some of [Disney’s] original “9 Old Men” animators (such as Frank Thomas). Disney had two basic tribes, both at extremes: “the creatives” and “the suits”. It was a thrill to work with the extreme that was “the creatives”; there were lots of them, and they could do anything and loved to do anything. I don’t know how to say anything evenhanded about the other tribe.

As far as Apple goes, it was a different company every few years from the time I joined in 1984. There was Steve [Jobs] — an elemental force — and then there was no Steve. There was John [Sculley]. He was pretty good, but the company grew so fast and started getting very dysfunctional. And then on downhill.

One way to think of all of these organizations is to realize that if they require a charismatic leader who will shoot people in the knees when needed, then the corporate organization and process is a failure. It means no group can come up with a good decision and make it stick just because it is a good idea. All the companies I’ve worked for have this deep problem of devolving to something like the hunting and gathering cultures of 100,000 years ago. If businesses could find a way to invent “agriculture” we could put the world back together and all would prosper.

What comments do you have on how modern personal computing, once decentralized, now seems to be heading back towards centralization? Is the cloud over-hyped?

There was always a “cloud” in the ARPA view of things — this is why we invented the networks we did. The jury is still out on whether the ways in which it is now being presented as a new idea will actually be good manifestations of the pretty obvious synergies between local and global computing.

David Greelish has studied computer history and collected old computers for more than 20 years. He is a computer historian, writer, podcaster and speaker. He founded the original Historical Computer Society, published the zine Historically Brewed, and more recently founded the Atlanta Historical Computing Society. He has published all of his computer history zines, along with his own story, in the book The Complete Historically Brewed. He is the director of the Vintage Computer Festival Southeast 1.0, being held the weekend of April 20-21 in the greater Atlanta area.