At 11/13/00, Matthias Mueller-Prove wrote:
Hi Alan, (…) kind regards,

From: Alan Kay
Date: Mon, 13 Nov 2000 08:07:34 -0800

Matthias --

Yes, my PhD thesis is one of those (like Butler Lampson's) that crept back into the shadows. In fact, I don't even have a copy. But all PhD theses in the US are registered with a company that used to be called "University Microfilms" in Ann Arbor, Michigan. I think it has been bought and sold several times since then, and may have changed its name, but I'm sure it is still functioning and has a web page.

A simpler way to find out some of what the thesis was about would be to find a book called "History of Programming Languages II" (ca. 1996), published by Addison-Wesley. In there is a lengthy chapter I wrote giving the main essentials of the "Early History of Smalltalk", and included are pictures and descriptions of my thesis work. An excellent reference to some of the graphics and user interface practices of the 60s is the first edition of the book "Interactive Computer Graphics" by William Newman and Bob Sproull.

Back to my thesis: It was about the design (ca. 1967-9) of a desktop computer called the Flex machine that had a tablet and a calligraphic display. It included the parallel invention of multiple clipping windows (the other parallel line was contributed by Ivan Sutherland, Bob Sproull, Danny Cohen, etc.). Most of the "innovative" UI design I did that was connected with personal computing was done a few years later at Xerox PARC, where the still current ideas of a bit-map screen, overlapping windows, icons, etc., were first developed.

Cheers,
Alan

UMI Company, f.k.a. University Microfilms, http://www.umi.com
The Reactive Engine. Kay, Alan Curtis, PhD. The University of Utah, 1969. 336 pp.
Alan Kay at Library of the University of Utah
University of Hannover 70-03,806
University of Hamburg M 420
At 4/2/01, Matthias Mueller-Prove wrote:
Hi Alan, just a short update on your thesis. I found microfiches and -films and put some excerpts online. [The Reactive Engine and FLEX] till soon,

At 8:38 -0800, 02.04.2001, Alan Kay wrote:
Matthias --

It's almost frightening to see these again (like bringing "The Mummy" back to life in a horror picture!). I must confess that, while I do remember doing the work and writing these theses, I don't remember their contents anymore. All of this I thought of as provisional stuff to be thrown away until a few years into the Xerox PARC experience.

Cheers,
Alan

From: Alan Kay
Date: Thu, 5 Apr 2001 07:10:15 -0800
First, I can't remember if you told me that you had read "The Early History of Smalltalk" (in History of Programming Languages II, Addison-Wesley, ca. 1995-6). This has a good historical account of the influences for many of these technologies.

Both of the tracks you mention were combined in the ARPA research projects, and also at Xerox PARC. One of the very first projects at PARC initially had the title "Implementing NLS On A Minicomputer". Bill Duvall, one of the NLSers at SRI, came over to PARC and did a fabulous version on a NOVA with our first bit-map display. All of the Smalltalks at PARC had hyperlinks, not just between "content" but between "projects" (the GUI there was not just the first overlapping window interface; it also had what we would call today "multiple desktops" that were connected via hyperlinks).

BTW, there was really no connection between the Xanadu stuff and NLS, except maybe from NLS to Xanadu. Also, NLS was a real system; the people in the project used it 24 hours a day for everything they did. Xanadu had many good-seeming ideas, but not much of it was usable back then.

Also, it is worth giving several other ARPA projects in the 60s much more credit for the foundations of the GUI. For example, RAND Corp invented (in 1961) JOSS, the first great end-user language, (in 1964) a great input tablet, (in 1966) a recognizer better than Graffiti, and (in 1967-9) a wonderful interactive pen-based system called GRAIL. Because of the friendly collaborative nature of ARPA, and the fact that almost everyone at PARC came from ARPA, there was a tremendous amount of cross-fertilization of ideas, to the point where it is difficult to pin down their actual history.

What went wrong is as simple as why the early commercial PCs were shipped without a network and a GUI, in spite of the fact that a completely integrated system had been done at PARC many years before.
Namely, ARPA and PARC had a grand vision of what personal computing should be like, and most hobbyists and commercial companies had either a very limited vision, or no vision at all.
There are various writings to be found on the Net. You might also want to look at

From: Alan Kay
Date: Fri, 6 Apr 2001 07:08:58 -0800
The hypertext quality of Engelbart was not as important as the general ideas of hyperlinking. Quite a bit of thought was put into this and implemented in many different systems. In particular, the late bound systems at PARC (especially Smalltalk) already had every object hyperlinked automatically.
I think that some people at PARC really liked the Engelbart style of document, and others were more interested in inventing various forms of Desk Top Publishing. As I mentioned, Smalltalk could link anything dynamically, so we didn't worry about it. We used it when it was useful.
This was intentional. It actually was the vehicle for several design centers, each of which was somewhat consistent.
Actually, LISP had a pretty well thought out set of stuff (by Warren Teitelman and Bob Sproull, and implemented by Bob Sproull).
You were looking at the techie version of Squeak. Check out www.squeakland.org in a week or two to see "Squeak for the rest of us".
From: Alan Kay
Date: Fri, 11 Dec 1998 08:51:02 -0800

At 5:09 PM -0000 12/11/98, Jonathan A. Smith wrote:
Well, I claim that I haven't escalated the idea since its real incubation in the early days of PARC (heh heh -- others might disagree ...). But just having the right-looking and -acting HW doesn't come close, because the Dynabook was always a "user relationship" idea. One of the titles of an early paper was "A Dynamic Medium for Creative Thought", and the main analogy was always to art and literature (especially of the scientific type). In another early paper, I called the computer a "metamedium", since its content was dynamic descriptions of media.

The most important new powerful idea that the computer brought to art and literature (and civilization) was the ability to dynamically simulate descriptions of ideas as opposed to just stating them. This could be the basis for a completely new set of end-user and human-to-human relationships with "powerful ideas" that would be as world-changing as the analogous new properties brought by the printing press, and the eventual incredible changes in world view and in how we describe and argue about ideas. So, we'll know if we have the first Dynabook if we can make the end-user experience one of "reading and writing" about "powerful ideas" in a dynamic form, and do this in such a way that large percentages of the bell curve can learn how to do it.

When Martin Luther was in jail and contemplating how to get the Bible directly to the "end-users", he first thought about what it would take to teach Latin to most Germans. Then he thought about the problems of translating the Bible into German. Both were difficult prospects: the latter because Germany was a collection of provinces with regional dialects, and the dialects were mostly set up for village transactions and court intrigues. Interestingly, Luther chose to "fix up" German by restructuring it to be able to handle philosophical and religious discourse.
He reasoned that it would be easier to start with something that was somewhat familiar to Germans, who could then be elevated, as opposed to starting with the very different and unfamiliar form of Latin. (Not the least consideration here is that Latin was seen as the language of those in power and with education, and would partly seem unattainable to many, e.g. farmers, etc.) I think Martin Luther was one of the earliest great user interface designers -- because he understood that you have to do much more than provide function to get large numbers of people to become fluent. You should always try to start with where the end-users are and then help them grow and change.

So, the Dynabook problem is not just how to get the computer to simulate media -- especially those that only the computer can do -- but to have the "literature" of how this is specified *seem to be learnable* (and then, in fact, be learnable).

(There are many deep considerations about the forms that will really do the job that are beyond the scope of this reply -- and I don't have time to get into all the issues right now -- but the critical part is that symbolic descriptions are required to synergize with those that can be dealt with by hand and eye. One way to appreciate this is to think about the difficulty of making Tom Paine's argument against monarchy and for democracy by using stained glass windows! It is hard for many people to understand that it is the very difficulty of symbolic ways of rendering and knowing -- and the surmounting of these difficulties -- that makes the difference between traditional societies and the society we live in. I.e., we aren't an oral society with a writing system tacked on; we think qualitatively differently about the world -- this is what McLuhan meant by "the medium is the message": our representation and idea systems are not linear to ideas, but changes in them allow previously unthinkable things to become thinkable.)
Another of the many Dynabook goals has to do with another analogy to language: children learning English are also learning the language of Shakespeare and Bertrand Russell. The difference is in years of experience with the world and its ideas, and in the architectural structuring of English to handle powerful ideas as well as mundane ones. If, e.g., Squeak can show a continuity from authoring environments that 5-year-olds can use up to those that Dan Ingalls wants to use, without changing language (but perhaps with different scopes and safeguards), then part of the Dynabook vision will have been realized. (I.e., adults are pretty hopeless, and real changes come when children are introduced to new paradigms early in life.) Another "i.e." is that things work best when they can be used for both mundane and serious purposes (imagine only being able to use language when you had something important you wanted to talk about -- JIT doesn't work for ideas!).

And there also has to be a literature, not just a language. Over the next several years, we have to get at least 1000 dynamic examples of "21st century content" on the net. (BTW, these don't have to be in Squeak to be useful, but one of the reasons we built Squeak was so that *we* at least could control our own SW destiny to realize all these Dynabook goals.) In calendar '99 we will be asking people interested in creating the Dynabook to help make this content (and a lot of these people are currently on the Squeak mailing list...).

This is why our project has been going on for many years. At the recent 30th anniversary celebration of Engelbart's great demo of hyperlinking ++++, the writer Denise Caruso (who only recently found out about what Doug Engelbart was trying to do and did do) said that the thing that surprised her about the last 15 years was just how little most people were willing to settle for, compared to what Engelbart and others saw could be done. I think a similar remark could be applied to our project.
We aren't willing to settle for less than what can really be done, and it has just taken quite a while to amass all the ideas and tools that are needed. I think you can see from the above that "Squeak in a Notebook" doesn't equal a Dynabook -- but the thing that really excites me from head to toe is that Squeak is now comprehensive enough to *make* the original conception of the Dynabook over the next few years. Then we can just stuff that into the most reasonable HW of that time (or indeed just make the HW that is required).

forwarded to me by Marcus Denker