The history of Palm itself most certainly doesn’t extend as far back as the 19th century, as most of you will know. The company was founded in 1992 by Jeff Hawkins, joined by Donna Dubinsky and Ed Colligan, and those of you with a proper sense of history will probably know that, with a bit of effort, you could stretch Palm’s history a bit further back to the late 1980s. At that time, Hawkins worked at GRiD, where he created the GRiDPad, one of the first tablet computers and the Palm Pilot’s direct predecessor.
To understand Palm’s history, you have to understand Hawkins’ history. To understand Hawkins’ history, you have to look at the technology that was at the very core of the Palm Pilot: handwriting recognition. This technology most certainly wasn’t new when Hawkins started working on it, and as early as 1888, scientists and inventors were already working on the subject – in one way or another.
Before we move on, it’s important to make a few distinctions in order to make clear what I mean by “handwriting recognition”. First and foremost, there’s the distinction between printed character recognition and handwritten character recognition, which I’m assuming is obvious. What’s less obvious, perhaps, is that handwritten character recognition further breaks down into online and offline handwritten character recognition. The latter refers to – simply put – scanning handwritten characters and recognising them as such; this is used extensively by postal services to scan handwritten addresses.
With online handwritten character recognition, characters are recognised as they are written. You could do this in a variety of ways, but the one most of us are familiar with is using a stylus on a resistive touchscreen. However, it can also be done on a capacitive touchscreen, a graphics tablet, or possibly even a camera (I don’t know of any examples, but it seems possible). This is the kind of handwriting recognition this article refers to.
First steps
Having said that, the history of handwriting recognition starts in the late 19th century. There were systems which may look like they employed handwriting recognition, the most prominent of which is probably the telautograph. This mechanical device was invented and patented by Elisha Gray – yes, that one – in 1888, and converted handwriting or drawings into electrical impulses using potentiometers; these impulses were then sent to a receiving station, which recreated the handwriting or drawing using servomechanisms and a pen. As ingenious as this system is, it isn’t handwriting recognition, because nothing’s actually being recognised.
In 1914, a system was invented that is considered to be the first instance of handwritten character recognition. Hyman Eli Goldberg invented and patented his ‘Controller’, a device that converted handwritten numerical characters into electrical data which would in turn instruct a machine in real-time.
It’s quite ingenious. I’m no expert in reading patent applications, and the older-style English and technical writing don’t help, but the way I understand it, it’s simple and clever at the same time. Characters are written using an electrically conductive ink. A ‘contactor’, consisting of six groups of five ‘terminals’ (so, six digits can be written), is then applied to the written ink. The electrically conductive ink of a character will connect the five terminals in a specific way, which creates circuits; the way these terminals are connected by the ink depends on the shape of the character, thus creating various different circuits (see the image below). These different currents then give different instructions to the machine that’s being controlled.
Neither of these systems employed a computer, so we’re still a way off from handwriting recognition as we know it today. In addition, there were more systems – both more and less advanced than the ones I’ve already described – but I’m not going to describe them all; the point I’m trying to make is that the idea of trying to control a machine using handwriting is an old one indeed, with implementations dating back to the 19th century.
Now let’s jump ahead to the late ’50s and early ’60s, and bring computing into the mix.
Stylator
Before we actually do so, we should consider what is needed to operate a computer using handwriting. It seems simple enough, but consider what computers looked like during those days, and it becomes obvious that a lot had to be done before we arrived at handwriting recognition on a computer. An input device was needed, a display to show the results, a powerful computer, and the intricate software to glue it all together.
The input device was the first part to come. In 1957, Tom Dimond unveiled his Stylator invention, in a detailed article titled “Devices for reading handwritten characters”. Stylator is a contraction of stylus and interpreter or translator, which should be a clear indication of what we’re looking at: a graphics tablet with a stylus.
Stylator’s basic concept isn’t all that different from Goldberg’s Controller. However, it improves upon it in several crucial ways, the most important of which is that instead of connecting terminal dots with conductive ink to create circuits, you’re using a stylus to draw across a plastic surface with copper conductors embedded in it. The conductors are laid out in such a way that with just three lines consisting of seven conductors, all numerical characters can be recognised. The illustration below from Dimond’s article is pretty self-explanatory.
As you can see, writing numerals ‘around’ the two dots will ensure the characters can be recognised. When the stylus crosses one of the conductors, the conductor is energised and the combination of energised conductors corresponds to a numeral. This system allows for a far greater degree of variation in handwriting styles than the Controller did, as you can see below with the numeral ‘3’.
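Conceptually, Dimond’s scheme reduces recognition to a table lookup: the set of conductors the stylus crossed uniquely identifies the numeral, regardless of the exact pen trajectory. Here’s a minimal sketch of that idea – note that the conductor labels and the lookup table are entirely made up for illustration, not Dimond’s actual seven-conductor layout:

```python
# Sketch of Stylator-style recognition: a numeral is identified purely by
# WHICH conductors were energised, not by the precise path of the stylus.
# Conductor labels and table entries are hypothetical.

NUMERAL_TABLE = {
    frozenset({"a", "b", "c"}): "1",
    frozenset({"a", "d", "e", "g"}): "3",
    frozenset({"b", "d", "f"}): "7",
}

def recognise(energised_conductors):
    """Return the numeral matching a set of crossed conductors, or None."""
    return NUMERAL_TABLE.get(frozenset(energised_conductors))

# Any stroke crossing exactly these conductors reads as a '3'.
print(recognise({"a", "d", "e", "g"}))
```

Because only the set of crossings matters, many differently shaped strokes map to the same numeral – exactly the tolerance for handwriting variation shown with the numeral ‘3’ above.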
The two-dot system can be expanded to four dots to accommodate all the letters of the alphabet, but as you can see in the examples below, it does require a certain amount of arbitrariness in how the letters must be written.
Alternatively, Dimond suggests, you can employ the sequence in which the conductors are energised to expand the two-dot system to also allow for recognising letters. It’s also important to note that the Stylator tablet required you to manually clear the character recognition buffer by tapping the stylus on a separate area because Stylator has no way of knowing when a character is completed.
Dimond lists a number of possible uses for Stylator. “Several uses have been suggested for the Stylator. It is a competitor for key sets in many applications. It has been successfully used to control a teletypewriter. It is attractive in this application because it is inexpensive and does not require a long period for learning to use a keyboard,” Dimond writes, “If the criterial areas are used to control the frequency of an oscillator, an inexpensive sending device is obtained which may be connected to a telephone set to send information to remote machines.”
There are several key takeaways from Dimond’s Stylator project, the most important of which is that it touches upon a crucial aspect of the implementation of handwriting recognition: do you create a system that tries to recognise handwriting, no matter whose handwriting it is – or, alternatively, do you ask that users learn a specific handwriting that is easier for the system to recognise? This would prove to be a question critical to Palm’s success (but it’ll be a while before we get to that!).
In the case of the former, you’re going to need very, very clever software and a very sensitive writing surface. In the case of the latter, you’re going to need very simple letters and numerals with as few strokes as possible to make it easy to learn, but the recognition software can focus on just that specific handwriting, greatly reducing its complexity. Stylator clearly opted for the latter due to hardware constraints.
The Stylator, while a huge leap forward over earlier systems, was still quite limited in what it could do. To really make handwriting recognition a valid input method, we need more. Let’s make another leap forward, and arrive at a system consisting of a graphics tablet, CRT display, recognition software, and a user interface – essentially a Palm Pilot the size of a room.
The holy GRAIL
Over the course of the 1960s, the RAND Corporation worked on something called the GRAIL Project, short for the Graphical Input Language Project. The description of the project is straightforward: “A man, using a RAND Tablet/Stylus and a CRT display, may specify and edit a computer program via flowcharts and then execute it. The system provides relevant feedback on the CRT display.” The entire project is detailed in a three-part final report, and was sponsored by the Advanced Research Projects Agency (ARPA or DARPA, it’s been renamed quite a few times) of the US Department of Defense.
The GRAIL Project was part of a larger industry interest at the time in human-machine interaction. GRAIL is an experiment into using a tablet and stylus to create computer programs using flowcharts – and in doing so, includes online handwriting recognition, a graphical user interface with things like resize handles, buttons, several system-wide gestures, real-time editing capabilities, and much more.
Let’s start with the RAND Tablet/Stylus. I think some of you may have heard of this one before, especially since it was often quoted in articles about the history of tablets published after the arrival and ensuing success of Apple’s iPad. The RAND tablet is a massive improvement over the Stylator, and would be used in several other projects at RAND – including GRAIL – even though it was originally a separate research project, also funded by DARPA. As was the case with many other RAND projects at the time, a detailed report on it was written, titled “The RAND Tablet: a man-machine graphical communication device”. The summary neatly details the device:
The Memorandum describes a low-cost, two-dimensional graphic input tablet and stylus developed at The RAND Corporation for conducting research on man-machine graphical communications. The tablet is a printed-circuit screen complete with printed-circuit capacitive-coupled encoders with only 40 external connections. The writing surface is a 10″×10″ area with a resolution of 100 lines per inch in both x and y. Thus, it is capable of digitizing >10⁶ discrete locations with excellent linearity, allowing the user to “write” in a natural manner. The system does not require a computer-controlled scanning system to locate and track the stylus. Several institutions have recently installed copies of the tablet in research environments. It has been in use at RAND since September 1963.
As I already mentioned, during those times a lot of research went into improving the way humans interacted with computers. After coming to the conclusion that the then-current interaction models were suboptimal for both computer and user, scientists at RAND and elsewhere wanted to unlock the full potential of both user and computer. A number of these projects were “concerned with the design of ‘two-dimensional’ or ‘graphical’ man-computer links” (in other words, the first shoots of the graphical user interface).
From the very beginning, RAND focussed on exploring the possibilities of using “man’s existent dexterity with a free, pen-like instrument on a horizontal surface”. This focus led to the eventual creation of the RAND Tablet, which was, as we already saw in the description in the summary above, quite advanced. The technical workings are slightly beyond my comfort zone (I’m no engineer or programmer), but I believe I grasp the general gist.
The tablet consists of a sheet of Mylar with printed circuits on each of its two sides; the top circuit contains lines for the x position, while the bottom circuit contains lines for the y position. These lines are pulsed with negative and positive pulses, which are picked up by a stylus with a high input impedance. Each x and y position consists of a specific sequence of negative and positive pulses; negative pulses are zeros and positive pulses are ones, which, when combined, lead to a Gray code for each x,y position. These can then be fed into a computer where further magic happens.
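As an illustration of the decoding side – with a pulse representation I made up for clarity, not the tablet’s actual circuitry or timing – the received train of negative/positive pulses can be read as a Gray-coded word and converted to an ordinary binary position:

```python
def pulses_to_gray(pulses):
    """Read a '+'/'-' pulse train (most significant pulse first) as a Gray-coded integer."""
    value = 0
    for pulse in pulses:
        value = (value << 1) | (1 if pulse == "+" else 0)
    return value

def gray_to_binary(gray):
    """Standard Gray-to-binary conversion by cascaded XOR."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

# Gray code's advantage on a tablet: physically adjacent positions differ
# in only one bit, so a marginal read-out is off by at most one position.
x = gray_to_binary(pulses_to_gray("+-+-"))  # Gray 1010 -> position 12
```

The one-bit-per-step property is precisely why Gray codes (rather than plain binary) suit position encoders: a stylus hovering between two adjacent lines can never produce a wildly wrong coordinate.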
This is just a basic description of how the system works, greatly simplified and based on a very simple, 8×8-line version of the RAND Tablet used in the article for explanatory purposes. There are a lot more interesting things going on deeper in the system (such as ignoring accidental movements), and if you want to know more technical details, I highly recommend reading the article – it’s quite readable.
The tablet itself was not a goal per se; it was a means to an end, with the end being to make it easier for humans to interact with computers. With this in mind, the RAND tablet would return to the forefront several years later, when RAND unveiled the GRAIL Project. At OSNews and other places, you’ve probably heard a lot about Douglas Engelbart’s NLS, the revolutionary work done at Xerox PARC, and the first commercially successful graphical user interfaces developed at Apple (the Macintosh), Commodore (AmigaOS), and Digital Research (GEM). Yet, I’ve never seen or heard anything about GRAIL, and to be honest, that’s a shame – because it’s bloody amazing.
I will provide a summary of what the GRAIL Project entails, but for those of you interested in the nitty-gritty, I advise you to read all three in-depth articles on the project (a total of 126 pages, so grab a coffee) and simply skip my summary:
- The GRAIL Project: an experiment in man-machine communications
- The GRAIL language and operations
- The GRAIL system implementation
The goal of the GRAIL Project was to develop a ‘common working surface’ for both human and computer – a CRT display. They concluded that the flexibility of the output (the CRT display) should be matched by the flexibility of the input, so that direct and natural expression on a two-dimensional surface was possible, and that’s – obviously – where the RAND Tablet comes back into play. The project had four design objectives:
- to use only the CRT and the tablet to interpret stylus movement in real-time
- to make the operations apparent
- to make the system responsive
- to make it complete as a problem-solving aid
This led them to the creation of a graphical programming language which uses flowcharts as a means for the user to instruct the computer to solve problems. The flowcharts were drawn by hand on the tablet, and would appear on the screen in real-time. Much like Ivan Sutherland’s Sketchpad, the user could draw a ‘messy’ shape (say, a rectangle), and the computer would replace it with a normalised variant. He could then manipulate these shapes (resize, move, alter) and connect them to create a flowchart. He could also write on the tablet, and have it appear on the screen – and much like the rectangle, the computer would recognise the handwritten characters, and turn them into normalised characters.
To facilitate the interactions, a dot on the display represented the position of the stylus on the tablet, and real-time ‘ink’ was drawn on the display whenever the stylus was pressed onto the tablet. The tablet surface corresponds 1:1 with the display surface. These three elements combined allowed the user to remain focussed on the display at all times – clearly an intermediary step towards modern high-end graphics tablets which combine pressure sensitive digitisers and styluses with displays.
The system also contained several elements which would return in later user interfaces, such as buttons and resize handles, and would even correct the user if he drew something ‘unacceptable’ (e.g., drawing a flow from one symbol to another if such a flow was not allowed).
Thanks to the wonder of the internet and YouTube, we can see GRAIL in action – and narrated by Alan Kay. Kay even states in the video that one of the window controls of the Mac was “literally” taken from GRAIL.
The GRAIL Project also introduced several gestures that would survive and be used for decades to come. The caret gesture was used to insert text, a scrub gesture to delete something, and so on. These gestures would later return in systems using the notebook UI paradigm, such as PenPoint OS and Newton OS.
The biggest challenge for the GRAIL Project engineers was to ensure everything happened in real-time, and that the system was responsive enough to ensure that the user felt directly in control over the work he was doing. Any significant delay would have a strong detrimental effect on the user experience (still a challenge today for touch-based devices). The researchers note that the computational costs for providing such accurate user feedback are incredibly high, and as such, that they had to implement several specialised techniques to get there.
For those that wish to know: the GRAIL Project ran on an IBM System/360 Model 40-G with two 2311 hard disks as secondary storage and a Burroughs Corp. CRT display, and the basic operating system was built from scratch specifically for GRAIL. Despite the custom nature of the project and the fact that the System/360 was available to them on an exclusive basis, the researchers note that the system became overloaded under peak demands, illustrating that the project was perhaps a bit too far ahead of its time. At the same time, they also note that areas were being investigated to distribute the processor’s load more evenly.
While those of you interested in more details and the actual workings at lower levels can dive into the three articles linked to earlier, I want to focus on one particular aspect of the GRAIL Project: its handwriting recognition. I was surprised to find just how advanced the recognition system was – it moved beyond ‘merely’ recognising handwritten characters, and allowed for a variety of gestures for text editing, as well as automatic syntax analysis to ensure the strings were valid (this is a programming environment, after all).
To get a grip on how the recognition system works, we have to step away from the GRAIL Project and look at a different research project at RAND. The GRAIL articles treat handwriting recognition rather matter-of-factly, referring to this other project, titled “Real-time recognition of handprinted text”, by Gabriel F. Groner, from 1966, as the source of their technology.
The RAND Tablet had already been developed, and now the task RAND faced was to make it possible for characters handwritten ‘on’ the tablet to be recognised by a computer so they could be used for human-machine interaction. The researchers eventually ended up with a system that could recognise the upper-case Latin alphabet, numerals, and a collection of symbols. In addition, the scrubbing gesture (for deletion) mentioned earlier was also recognised.
There were a small number of conventions the user had to adhere to in order for the character recognition software to work properly. The letter O had to be slashed to distinguish it from the numeral 0, the letter I needed serifs to distinguish it from the numeral 1, and Z had to be crossed so the system wouldn’t confuse it with the numeral 2. In addition, characters had to be written separately (so no connected script), and cursive elements like curls had to be avoided.
Already recognised text could also be edited. Any character already on the screen could be replaced simply by overwriting it (remember, the tablet and display corresponded 1:1). In addition, characters could be removed by scrubbing them out.
So, let’s get to the meat of the matter. How does the actual recognition work? The basic goal of a handwriting recognition system is fairly straightforward. You need the features which are most useful for telling one character apart from another; features which remain fairly consistent even among variations of the same character, but differ between characters. In other words, you want those unique features of a character which are always the same, no matter who writes the character.
First, you need to get the actual data. As soon as the stylus is pressed on the tablet’s surface, a switch in the stylus is activated, which signals the recognition system that a stroke has been initiated. During a stroke, the recognition system is notified of the position of the stylus every 4 ms (each position is accurate to within about 0.127 mm). When the stylus is lifted off the surface of the tablet, the recognition system is notified that the stroke has ended.
The set of data points received by the recognition system is then smoothed (to reduce noise) and thinned (to remove unnecessary data points). The exact workings of smoothing and thinning are defined by a set of formulas, for which I refer you to the article (I don’t understand them anyway). In any case, the goal is to reduce the amount of processing required by reducing the number of individual data points.
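To give a flavour of what smoothing and thinning accomplish – these are generic stand-ins, not Groner’s actual formulas – here’s a toy version: a three-point moving average to dampen jitter, and a distance threshold to discard redundant samples:

```python
def smooth(points):
    """Three-point moving average over (x, y) samples; endpoints are kept as-is."""
    if len(points) < 3:
        return list(points)
    out = [points[0]]
    for a, b, c in zip(points, points[1:], points[2:]):
        out.append(((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3))
    out.append(points[-1])
    return out

def thin(points, min_dist=1.0):
    """Keep only points at least min_dist away from the last kept point."""
    kept = [points[0]]
    for x, y in points[1:]:
        dx, dy = x - kept[-1][0], y - kept[-1][1]
        if dx * dx + dy * dy >= min_dist * min_dist:
            kept.append((x, y))
    return kept
```

Both steps shrink the data before the more expensive feature analysis – the same motivation for reducing processing that the article cites.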
The character (now represented by data points) is then analysed for features like curvature, corners, size, and several position features. The entire character is divided up into a 4×4 grid, and the features are located within any of the grid’s 16 areas. With this information in hand, the recognition system’s decision-making scheme decides which symbol – if any – has just been written. The recognition system can handle characters consisting of multiple strokes, and it’s smart enough that the user no longer needs to inform the system when a character has been completed.
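The 4×4 grid itself is easy to sketch: normalise the character’s bounding box and bucket each sample (or detected feature) into one of the 16 cells. This is purely illustrative – it makes no attempt at Groner’s actual decision-making scheme, and it assumes a non-degenerate bounding box:

```python
def grid_cell(x, y, bbox):
    """Map a point to the (row, col) cell of a 4x4 grid spanning the bounding box."""
    x0, y0, x1, y1 = bbox
    col = min(3, int(4 * (x - x0) / (x1 - x0)))
    row = min(3, int(4 * (y - y0) / (y1 - y0)))
    return row, col

# A few samples of a hypothetical stroke, bucketed into grid cells.
stroke = [(0.0, 0.0), (1.0, 0.2), (2.0, 1.0), (2.0, 2.0)]
xs, ys = [p[0] for p in stroke], [p[1] for p in stroke]
bbox = (min(xs), min(ys), max(xs), max(ys))
cells = [grid_cell(x, y, bbox) for x, y in stroke]
```

Normalising to the bounding box first is what makes the features position- and size-independent: the same digit written large or small lands in the same cells.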
To give you an idea of how much variation is allowed, look at the below set of strokes. Each and every one of them is recognised as a 3.
The accuracy of the system proved to be very high. The researchers asked people with zero experience with the system to sit down and use it, and they found that the average accuracy rating was 87% (I doubt I can even hit that with modern touch keyboards). People with previous experience with the system hit an average of 88%, and those that helped design the system – and, in fact, on whose handwriting the system was based – hit 93%. The researchers found that several characters proved especially problematic, such as [ vs (, or the asterisk.
The team concludes as follows:
The recognition program responds quickly and is efficient in storage. When the time-delay normally used to separate symbols is set to zero, the lifting of the pen and the display of a recognised symbol are apparently simultaneous. The recognition program – including the data analysis and decision-making routines, and data storage; but not display or editing routines – requires about twenty-four hundred 32-bit words of memory.
The system proved to be capable enough to be used in the GRAIL Project, as you could see in the video (although it was most likely refined and expanded by that point). It’s incredibly impressive to see what was possible given the limited resources they had to deal with, but if there’s one thing I’ve learnt over the years poring over this kind of stuff, it’s that limited resources are a programmer’s best friend.
So, GRAIL was the whole nine yards – a tablet and pen operating an entire system using handwriting, shape, and gesture recognition. What’s the next step? Well, as fascinating and impressive as the GRAIL Project is, it’s ‘only’ a research project, not an actual commercial product. In other words, the next step is to see who first brought this idea to market.
And here, we run into a bit of trouble.
Going to market
We run into trouble because I can’t seem to find a lot of information about what is supposedly the first product to bring all this to market. Jean Renard Ward, a specialist and veteran in the field of pen computing, character recognition, and the like, has created a comprehensive bibliography on these topics, and put it online for us to peruse.
In it, he notes that Applicon Incorporated, a company which developed, among other things, computer aided design and manufacturing systems, developed the first commercial gesture recognition system. He references one of the company’s manuals, which, as far as I can tell, is not available online. He wonders if Applicon, perhaps, uses the Ledeen character recogniser.
At first, I couldn’t find a whole lot of information on Applicon (you’d be surprised how little you can do out of the Dutch countryside). There’s a Wikipedia page for the company, but it lacks verification and citations, so I couldn’t judge the validity of the claims made. Wikipedia claims that Applicon’s products ran on PDP-11 machines from DEC, and that, much like the GRAIL Project, they used a tablet mapped to the display for input, including gesture and character recognition. However, without proper citations, it was impossible to verify.
And then, during one last ditch attempt to find something more tangible, I struck gold. David E. Weinberg has written a detailed history of CAD, titled “The Engineering Design Revolution”, which is freely available online (a 650-page treasure trove of awesome stuff). Chapter 7 deals entirely with Applicon and the company’s history, and also includes a fairly detailed description of the products it shipped.
Applicon’s early systems, built and sold in the early 1970s, were repackaged PDP-11/34 machines from DEC, which Applicon combined with its own Graphics 32 processor and called the AGS/895 Central Processing Facility. The software was written in assembly, and used a custom operating system (until DEC’s RSX-11M came out in 1974). The unique selling point here was the means by which the user interacted with the system. As described by Weinberg:
The key characteristic of Applicon’s AGS software was its pattern recognition command entry or what the company called Tablet Symbol Recognition. As an example, if the user wanted to zoom in on a specific area of a drawing, he would simply draw a circle around the area of interest with his tablet stylus and the system would regenerate the image displaying just the area of interest. A horizontal dimension was inserted by entering a dot followed by a dash while a vertical dimension line was a dot followed by a short vertical line. The underlying software was command driven and these tablet patterns simply initiated a sequence of commands. The system came with a number of predefined tablet patterns but users could create patterns to represent any specialized sequence of operations desired.
While it doesn’t specifically state that it employed handwritten character recognition, it can be inferred from the description – a letter or numeral is simply a pattern to which we arbitrarily ascribe meaning, after all. Applicon supposedly used the Ledeen gesture recogniser, which, in some literature, is actually called the Ledeen character recogniser, and is capable of handwriting recognition. I fully understand if some of you think I’m inferring too much – so I’d be very happy if someone who actually has experience with Applicon’s products would step forward and correct me.
From the 1970s and onward, multiple commercial products using handwriting recognition, styluses and tablets would enter the marketplace. Take the Pencept Penpad, for instance, a product on which Jean Renard Ward actually worked. He’s got a fascinating video and several images on his website demonstrating how it worked – basically a more compact version of the systems we just discussed like GRAIL and the AGS/895 Central Processing Facility.
As a sidenote, on that same page, you’ll also find more exotic approaches to handwriting recognition, like the Casio PF-8000 calculator, which used a grid of rubberised buttons instead of a digitiser. Even though it has no real bearing on this article, I find the concept quite fascinating, so I wanted to mention it anyway. There’s a video of it on YouTube showing it in action.
These relatively early attempts at bringing handwriting recognition and pen input to the attention of the greater public would later catch the attention of the behemoths of computing. GO Corporation, technically a start-up but with massive amounts of funding, developed PenPoint OS, which Microsoft perceived as such a threat that after several interactions between the two companies, Redmond decided it had to enter the pen computing market as well – and so, Windows for Pen Computing was born, which brought pen computing to Windows 3.x. Apple, of course, also followed this trend with the Newton.
All these products – PenPoint OS, Windows for Pen Computing, the Newton – have one thing in common: they were commercial failures. Fascinating pieces of technology, sure, but nobody bought and/or wanted them. It wasn’t until the release of the original Palm Pilot that pen computing and handwriting recognition really took off.
Driving the point home
If you’ve come this far, you’ve already read approximately 6000 words in the form of a concise history of handwriting recognition. While this may seem strange for an article that is supposed to be about Palm, I did this to illustrate a point, a point I have repeatedly tried to make in the past – namely, that products are not invented in a bubble. Now that patents rule the industry and companies and their products have become objects of worship, there’s a growing trend among those companies and their followers to claim ownership over ideas, products, and concepts. This trend is toxic, detrimental to the industry, and hinders the progress of technology.
I’ve just written 6000 words on the history of handwriting recognition, dating back to the 19th century, to drive the point home that the Palm Pilot, while a revolutionary product that has defined the mobile computing industry and continues to form its basis to this day, was not something that just sprang up out of thin air. It’s the culmination of over a hundred years of work in engineering and computing, and that fancy smartphone in your pocket is no different.
With that off my chest, let’s finally talk about Palm.
Anyone got a link to the PDF?
Seriously, I’ll read the article later when I have some time
I kinda like the experiment. Let’s get rid of the shitty ad based culture – which I don’t see anyway as I use an ad blocker, who does?? – and get back to paying people for making content.
I hope Adam gets a slice too for hosting the site.
You will get my money Thom
I agree with you. I will purchase as well, I would like to support this, and the ad driven model just promotes views with incendiary headlines.
The ads on osnews aren’t annoying at all, I’ve deactivated my adblocker here.
As long as these 2 points (no annoying ads and good content) are valid I don’t see any need for an adblocker.
I bought the PDF. I’d like to see more long articles with deep insight into the history, development, and decisions of a company. Maybe also some (technical) interviews with developers (e.g. with Jolla / Mer people)?
I can’t disagree more. The whole idea of advertising is to brainwash you to buy something or to buy a specific brand.
Clearly this brainwashing is working, or else advertising would have stopped a very long time ago…
To me it seems that every year humans become more and more a consumer, someone you can sell something to, an economic resource rather than a fellow traveller of life.
I also don’t buy into the argument that otherwise ‘many sites would disappear as you deprive authors of an income’. Before 2000, most websites – including OSNews, which still is – were a labour of love: content was good, web design often bad (Geocities, anyone?). After the commercialisation of the net, it’s often a dime a dozen.
Just take a look at technology sites: engadget, theverge, ars technica, pocket lint, boy genius report, anand, pcpro uk, wired, slashgear, etc etc.
Now surely they look slightly different and all have a slightly different angle, but by and large they review the same products. Often their existence is to make money from product announcements by showing (bucket)loads of ads beside the products AND to sell browsing stats to companies (The Verge has EIGHT trackers listed by Ghostery).
IMHO it would be extremely welcome if we lose some if not most of these sites, rather sooner than later…
Edited 2013-03-12 13:24 UTC
I don’t use an ad blocker – if a site has an irritating amount and style of ads, that’s a good reason not to visit it.
And OSNews is not among such sites; it respects its readers (but you wouldn’t see that, running an ad blocker everywhere).
Blocking ads has nothing to do with OSNews or obtrusiveness.
If you allow ads, you allow yourself to be willfully manipulated. We are manipulated enough anyway in this world.
Better to pay for the content upfront rather than via ads.
PS my other post got stamped into the ground, but I did actually pay 5 euro and have countless subscriptions on Zinio.
No, not really. Subliminal messages and brainwashing with ads are in the same category as astrology, energetic bracelets, fairies, and most “alternative” therapies. Starting with the fact that one of the pioneers of subliminal advertising manipulated his results, because his experiments didn’t really work the way he expected. Today, these ideas are thoroughly discredited. Yes, every now and then an article appears suggesting that these messages may have some influence, but a very limited one. And of course, isolated studies don’t mean much.
So, no. At best this is all pseudoscience. If you want to buy into it, be my guest, but don’t pretend it is a fact.
Sure, subliminal ads are highly dubious. For instance, they tried to splice single frames of a Coke bottle into movies – too fast for the rational mind to spot, but maybe it would be picked up subliminally.
I think that failed.
But we are not talking about subliminal advertising here, we are talking about ‘in your face’ advertising. There is no way the mind can ignore what it sees on the page.
That advertising works is clear: Google makes billions from it, and we haven’t stopped advertising since 1900.
If you’re so paranoid about the influence of advertising on you, you must not leave home much… (after all, ads are everywhere); also, don’t watch any films (product placement).
Articles can also be manipulative (such as this one, suggesting that alternatives to WebOS are better). OTOH, ads can be informative (just the other day one brought a new mobile network with a good offer to my attention; earlier, a new band via an ad on last.fm).
TL;DR
Joking – didn’t read it, because it’s 5 dollars.
The online version is free, as always on OSNews.
I know, it’s just I rarely get to complain about this site.
OSnews sold out!
Dividing the audience in two – peasants get the no-pictures version, while the rich get premium content.
Edited 2013-03-11 15:55 UTC
I bought a copy, even though I don’t really care for dead tech companies all that much. I just like trolling this site, hence the show of support.
It would be nice if you offered a couple of full-colour preview pages, though. Especially useful since $4.99 would seem a bit much for people raised on $0.99/$1.99 mobile purchases.
I know Gumroad offers some sort of a preview feature for sellers, so maybe you should look into it.
Edited 2013-03-12 13:03 UTC
Just wanted to congratulate you, Thom, on completing this opus amoris magni. It’s inspiring to see tech histories like this that weave diligent historical research with a competent understanding of the technical side. And might I add more plainly that I really enjoyed the article. Bravo. (Looking forward to paying for the PDF!)
Yup, I haven’t had the time to read it yet, but want to thank you for the time it took to pound the monster out. I’m sure it will be interesting reading.
Thanks guys!
Thom, congrats!
If you add Maemo/MeeGo/Moblin to your OS list, it would be greater yet!
I just skimmed over it, but I’m surprised you didn’t mention Psion.
Psion and Symbian are all reserved for the next article! Collection of data has already begun – shopping for devices is the next phase.
So hop hop, buy the article.
Regarding Symbian, I advise you to have a look into
http://www.developer.nokia.com/Develop/Featured_Technologies/Symbia…
for the technical stuff.
The site is probably the only information still available online about how Symbian works.
Couldn’t care less about Palm – cry me a river, PalmSource. They bought a mermaid princess and locked her away in the basement, where she eventually died. They could’ve gone all Red Hat Enterprise and stuff, like making the platform free software (à la OpenBeOS) and selling specialized professional applications on top of it, but noooooo. They had the goods in their hands and they screwed the pooch. We all know the rest of this story – Apple won; Jobs was ill, but a proud daddy king, and had a blast destroying everyone else with a machine gun. Including Palm.
A great article that brings lots of memories to life; I really enjoyed reading it. Thanks for the excellent work you did.
Ever thought about getting a Flattr account? There are already outstanding Flattrs on your Twitter feed that you just need to harvest. I think it’s a pretty neat system, and the sum of small contributions surely adds up for a great site like OSNews!
Buying it before I even read the online version – I definitely want to support your work, Thom. Hopefully the PDF doesn’t look too small on the Kindle (if the PDF was designed for A4); could I ask for an ePub version, though?
I tried to create an EPUB version, but I came to the conclusion that the format has been designed by morons. It’s basically just an HTML container, but every reader renders it in its own special way. After days and days of headaches and completely broken formatting on every reader, I gave up.
I’ve looked at the format too, and it is indeed crap. In my ideal world, all eBooks would be regular HTML5 and all eBook readers would use a WebKit or Gecko engine. The reason ePub breaks so much is because vendors keep munging it into something custom instead of good ol’ reliable HTML5.
What tools were you using to write the article / convert it? For Word documents, I’ve found Aspose.Words to be by far the best at producing something sane and usable from MS Word.
Raw semantics are the way to do it. I’m thinking of extending ReMarkable to publish to ePub so that I can use a markdown syntax to write and ensure the simplest possible HTML output.
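To make the “basically just an HTML container” point concrete: an EPUB is a ZIP file with a fixed layout – an uncompressed “mimetype” entry first, a “META-INF/container.xml” pointing at an OPF package file, and XHTML content. A rough Python sketch of the container (file names and metadata here are illustrative, and a real book would also want an NCX/nav table of contents):

```python
import zipfile

def build_minimal_epub(path, title, xhtml_body):
    """Build a bare-bones EPUB: a ZIP whose first entry is an
    uncompressed 'mimetype' file, plus container.xml, an OPF
    package document, and one XHTML content file."""
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>{title}</dc:title>
    <dc:identifier id="id">urn:example:minimal-epub</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="main" href="main.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="main"/></spine>
</package>"""
    xhtml = f"""<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>{title}</title></head>
<body>{xhtml_body}</body></html>"""
    with zipfile.ZipFile(path, "w") as z:
        # The spec requires 'mimetype' as the first entry, stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container,
                   compress_type=zipfile.ZIP_DEFLATED)
        z.writestr("content.opf", opf, compress_type=zipfile.ZIP_DEFLATED)
        z.writestr("main.xhtml", xhtml, compress_type=zipfile.ZIP_DEFLATED)
```

Whether a given reader then renders it sanely is, as Thom found out, another matter entirely – the container is the easy part.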
I used Pages. Works great for everything except EPUB, or so it would seem (I want to marry Pages – god, I love that software).
Thanks for buying, by the way.
Try saving as MS Word, then running it through Aspose.Words on Windows – that’s always worked fine on my Kindle (though you will need to set the correct title/TOC bookmarks, but I can help with that).
Seriously?!
… Why?
Say what?!
You are not doing much web development, I imagine.
In the world of ebook readers, having a plain web-browser engine to do the rendering is *incredibly* reliable compared to the neither-a-browser-nor-a-text-layout-engine renderers that most readers use. Use of CSS on these arcane static renderers is a complete dark art.
Fair enough in that context then, I imagine.
In my ideal world, eTexts would also be available in pure ASCII text-only format, and all eBook readers would be able to import text files. Then you could do whatever you want with it and read it using whatever software you want.
In fact, Thom, is yours available in text format? I’d love to support this kind of article, but formats like PDF are pretty useless to me. I found it a fascinating read and would love to see more of this kind of thing. It doesn’t even matter if I agree with everything that’s written. The important thing is, you got it out there and it can spark a whole load of interesting discussion.
Well done!
(I can’t wait for the Psion/Symbian one)
Well, you could download the HTML source and strip it.
In all seriousness, I’ll be sure to add a .txt version to the next article’s .zip. Small effort.
What about .mobi? That’s the native file format for the Kindle anyway (except that Amazon ebooks are wrapped in DRM, but inside of this, there’s just mobi)
I tried doing direct-to-mobi conversion with MobiPocket/Calibre, but the process just didn’t work right.
I spent about five days working on a process to convert a Pages document to Kindle in a way that actually worked. Software out there is unbelievably broken. The format is broken, the converters are broken, the readers are broken.
The only process I found that actually worked was Pages (or any other format) > MSWord > Aspose.Words > ePub > KindleGen = Mobi file
FictionBook (fb2) is another option: https://en.wikipedia.org/wiki/FictionBook
This is why I’m reading OSnews.
I still have a IIIxe and a Centro, and I used them until recently. My Centro is lying on a shelf – fourth day on 0% battery, and still working.
Wow. I can’t believe you claimed WebOS killed Palm. WebOS was the only BRIGHT SPOT of Palm. I owned Palm Treos and Pilots and I HATED THEM. When I got the first Pre and used WebOS, I was in heaven. WebOS is literally the best mobile OS experience I’ve ever had, and I think Steve Jobs was nervous at first – so much so that they changed iOS to have a kind of multitasking. They couldn’t copy the flow of WebOS because it was patented, so they did their own bit, which was okay but still nothing close to WebOS. Android has the same issue: why go to a dumb task-manager box to select which program you switch to? Having the card minimized but still running WAS BRILLIANT.
I wish we could see WebOS running on the newer phones with 1 GB of RAM and dual-core processors. The OS is amazing, and for you to gloss over it and even claim it’s the reason Palm died is ludicrous; I hope NO ONE buys your PDF, because WebOS wasn’t the problem. The hardware the OS was running on was the problem. Sorry you were late to the game on WebOS, but if you really studied it and compared it to Android and iOS, NO mobile operating system could compete with it. I would even say that if you put WebOS out today on a beefier phone that didn’t crack when you dropped your bag at work, it would definitely beat Windows Phone and BlackBerry phones, and given time it might compete with the big two.
HP didn’t know what it had. They bought it, and when they couldn’t figure out what exactly to do with it, they killed it. Except now it lives on in LG TVs. And another reason your dismissal of WebOS is CRAP: if it was such crap, then why is it still kicking around on HP printers and now LG TVs?
Like, say, the TouchPad?
The TouchPad is no phone, but it gives a glimpse of what to expect in terms of performance. As a TouchPad owner I can assure you that, at least performance-wise, Android 4.x runs circles around WebOS.
Sadly, I agree that Android (CM9) boots and runs great on my Touchpad, where WebOS was a bit slow and clunky.
I still have a soft spot for WebOS though, even if I don’t currently use it. The card UI paradigm was awesome and far easier to use for task switching and management IMO.
Thom didn’t claim that. Read again: “… it is my view that Palm was already long dead before webOS ever even arrived on the scene.”
For me webOS was a real gem, so I was a bit disappointed to read that it failed to impress Thom. But then again, we had different mobile histories, and Thom’s view on the history of mobile computing was invaluable.
You are right on the mark. webOS had great potential, but it had some delusional masters. From my point of view it was a victim of great mismanagement, which unfortunately doomed it forever. I hope I’m wrong, but I don’t see a bright future for the open-source version of webOS.
You are right about PalmOS, but wrong that WP or iOS copies it. They don’t have the inter-app communication paths or the rest. I’m not sure about WP, but iOS is on the left side of the sweet spot. Crippling is not simplicity. They still don’t get the Zen of Palm: you must allow anything that enhances the experience.
You didn’t mention the Kindle, but were it open, it would be at the sweet spot.
Yeah. PalmOS is “there you have it”, where iOS is “there you go”.
Palm OS 5 did support fat binaries – almost all applications with ARM support were fat. I think you had to get a special compiler from ARM themselves to make ARM-only applications. With CodeWarrior or GCC, you wrote a 68k application and linked in resources of ARM code (known as “ARMlets”). These had full access to the ARM CPU, but calling back into PalmOS was awkward, and all the UI code was still done in 68k code.
Sorry to hear that webOS didn’t impress Thom. It can certainly be sluggish, the browser sucks (charges I could also lay at my Nexus 7, which is oddly frustrating to use at times despite excellent hardware), and I have to carry around spare batteries, etc., but I’m keeping my Pre 3 until I find something – anything – that comes close to offering me the same experience. I adore the thing. Never having owned a Palm OS anything, I can’t compare, but – after coming late to the webOS party – I have the same feeling of regret and grief you feel for BeOS. BB10 may come close to filling the void, but there’s something characterful about Palm’s flawed little OS that can’t be reduced to whizz, or features, or specs, or even the card UI; using it just makes me happy.
Edited 2013-03-11 20:59 UTC
Here’s my ‘review’:
* For those reading via PDF, a _lot_ more images would have been appreciated, as 1) hyperlinks were not working on my Kindle and 2) without tabbed browsing, using hyperlinks is painful on an e-reader
* For the PDF, an A5 format would have been better. Most e-readers are small form-factor, and scaling from A4 to A5 just makes the text hard to read. I had to read in landscape mode
* An ePub/Mobi format would have been preferable to some. Whilst very difficult to do, it is kindest to embed the text of external content as footnotes in an eBook, so that offline reading is possible and the hyperlinked text in the main article can be followed and its context learnt
* Reading it as a book, I actually felt the article was too short! I was enjoying it very much and felt it was all over a bit too soon. If you intend to do “offline” versions (i.e. PDFs) of articles in the future, please bear in mind the use-cases of such offline content and expand your writing to state what you are subtly implying with your choice of hyperlinks – easy to understand on a desktop device, but much harder to grok between the lines on a mobile or e-reader device
* The various model names brought back a lot of memories. I used to work at a consumer electronics superstore in 2003, a period when the hardware was getting pretty powerful (300+ MHz ARMs) and WinMo was edging past Palm thanks to HP’s range of devices. It was a time when it was just becoming obvious at the consumer end that the PDA and the phone were about to crash into each other. Most PDAs had some kind of Bluetooth or WiFi option/expansion, and though the cost of data plans limited such access to corporate users, I could see that all the complexity and fiddliness of owning and configuring PDA + Bluetooth adapter + Nokia phone was going to get blown away by something that put it all together. That’s why the XDA was such a big thing (I think it needs a brief article of its own). That’s what killed Palm as a viable platform, IMO. The XDA + Pocket IE was to Palm what the iPhone + Safari was to WinMo
* I’m glad you didn’t go into WebOS; I don’t think it’s at all relevant to Palm OS and would have only detracted from the article
* Overall an excellent read, thanks
Edited 2013-03-11 23:58 UTC
Thom, I’d really love to buy it – but I normally don’t keep my credit/debit cards within reach. Is there any chance PayPal might be an option in the future? I have everything connected to that.
Either way though, been reading for years and I’ll support you with this when I figure out where I left my wallet!
MCK = Mike Chen Kernel. While discussions were going around what to license in the port from 68K to ARM, at some point everyone realized the home-grown kernel was just fine.
LETS DO THISSSSSSSSSSSSSSSSSS
http://www.palminfocenter.com/news/8493/pilot-1000-retrospective/
http://www.palminfocenter.com/graveyard.asp
and can’t wait to read it later today. It’s making me seriously nostalgic for my Palm IIIc; I pretty much took all my lecture notes on that with the fold-out keyboard.
I started with a Palm Pilot Pro and had a couple of cradles. It fitted really nicely into one, the two forming a single object. Too bad I never got it to synch with Linux, but it did with Windows. I carried it around in a leather case. My Psion 3a was a much superior computer, but the Palm was more fun to use due to its stylus input and size. It was also sturdier. The Psion always gave me the feeling it would shatter if dropped.
Later I bought a Palm Vx (8 MB vs the 2 MB of the V). It was a sleek-looking device. I synched it with AvantGo, which was a bit like an RSS service, synching news sites into a mobile version readable on the Palm. My train travels became more fun, reading news and playing Hearts.
My last Palm was a Palm T|X. Now I had a colour screen, which was very cool. It had TomTom navigation and even a web browser. On holidays I roamed the streets with it looking for open WiFi so I could synch AvantGo.
I still have all 3 Palms, although the Palm Vx has an alternative launcher which, for unknown reasons, keeps forgetting its settings. Rather annoying.
I can hardly wait for Apple to implement Ted Nelson’s 50-year-old ideas in their “walled garden”, and then to read, all over the internet, how Apple ripped off Ted Nelson (as they ripped off GRiD, GRAIL…)
btw
Good article, Thom – especially that history part!
Tried to buy via IE and it failed – after pressing “Ik wil dit” nothing happens. Had to switch to Chrome to do the actual purchase. Will start reading now!
I think I’ve clicked at least 10 URLs (in the PDF) that failed to open, mostly because of typos. Bad!
It’s not typos – I *just* found out PDFs don’t accept # in links, for some reason. I’m currently checking every URL – again – and fixing them. I’ll push out an updated version ASAP (I can send out a quick email to everyone who bought the PDF).
My apologies!
The updated version – v1.0.9 – with fixed links has been pushed out. You should be getting an email about it.
Thanks for pointing it out to me, jal_ – this was crazy! My heart was racing as soon as I found out a few of the links were mangled.
At the end of all these, Thom could probably publish a compilation book about phone OS history.
You should consider this.
It’s easy to self-publish on Amazon and elsewhere.
Edited 2013-03-12 12:48 UTC
“That faithful day in the company boardroom, when they all agreed to spin off the Palm OS, Palm sealed its fate.”
Surely you meant “That fateful day”? But then you would be repeating “fate”…CONUNDRUM
I had an original Palm Pilot. It was not as quick as you describe. However, I was running it with a later version of PalmOS than it shipped with, which might be the reason? Not sure really, but it could be slow and laggy at times.
The CPU on those was also just too slow to do things with the serial port. We ported our software to it, but it would take ~30 minutes to transfer a 2 MB file, compared to the 5 minutes it took on Windows CE. The ARM-based Palms were too late for us to consider using. At the time it was trivial to take code for desktop Windows and have it run on Windows CE/Pocket PC regardless of the CPU.
I disagree with the conclusions about the Palm Pres. The software was okay; the hardware was not. I almost went with a Pre 2 instead of a Samsung Galaxy S, as I wanted a hardware keyboard. But as it turns out, not having a keyboard is better than having a crappy one :/
It’s okay – I want to buy a Motorola too, but they keep screwing me over with their hardware (locked bootloader, no SD card, no removable battery, etc.). I think my next phone will probably be a Google-Moto.
Damn, those 16 years went by fast…
Thom, I had a Pre 2 and did not experience the horrendous battery life you’re referring to. It must be related to your test device being old and the battery already being worn out.
As for slowness, I also couldn’t complain. At the time it came out it wasn’t cutting-edge, but it also wasn’t slower than similarly priced Android models (remember, the Pre 2 was priced budget/mid-range). Many operations were significantly faster than on Android (at the time) – Just Type vs. Google Search for searching the contents of the phone, for instance, was a lot faster and had a much better UI. Task management was certainly a lot faster/better. Other than that, the only slowness I can recall related to issues with the mobile internet connection being flaky and capped at EDGE speed – this was due to it being a U.S.-targeted tri-band phone used in Europe (which admittedly was really annoying and ultimately the reason I moved on). But for pure UI interactions I don’t recall too many issues, at least not after the first couple of firmware versions… You just need to remember how old the Pre 2 is by modern standards. At the time it came out, for the price, it was perfectly capable.
Also, regarding quality of the hardware I never found it to be lacking. It didn’t have quite the polish of an iPhone, sure, but it felt great in the hand and the keyboard was above-average IMHO. The only issue I had was that I accidentally hit the Return key all the time while typing SMS’s, but a simple patch remedied that.
I guess you really need to have lived through it at that time, and to have believed in the future of the platform, to understand what was so great about the OS. It’s easy to forget how techy-targeted and obtuse the Android UI used to be in comparison, and how limiting iOS was without multi-tasking and notifications (it still is today, but to a lesser extent). In contrast to today’s fast and pretty Android phones it’s easy to write WebOS off as unimpressive, but at the time it came out there was nothing else like it.
Edited 2013-03-12 18:17 UTC
An entertaining article that rekindles fond memories.
There were a couple of omissions from Palm history that are important: 1) the LifeDrive and 2) NAND.
Palm introduced the $400 LifeDrive to much fanfare. It was meant to compete with Apple’s iPods of the day. I recall meeting a Palm rep who was showing it off at one of those electronics stores that has since closed. It relied on a 4 GB hard drive as both storage and memory, to the best of my recollection. This memory arrangement had bad effects: many complaints surfaced about frequent freezes and lockups. Although it was a beautifully designed machine – albeit with a rather poor, blue-tinted screen (something Palm was notorious for) – it was a failure.
After the LifeDrive, when Palm started to bring out their phone line, they switched to NAND flash memory. Palm touted this “feature” by stating that users would no longer lose their data if the battery ran dead. Unfortunately, as the hard drive had done in the LifeDrive, the NAND introduced other problems, such as instability and the loss of the instant speed so valued up until then. Software utilities soon sprang up to flush the cache.
The best Palm-made non-phone device is acknowledged to be the T3, a device with a decent screen that I bought used. The second best was the Tapwave – another company with a sad story. The Tapwave had many digitizer screen problems and joystick controller issues; I had to return mine three times before I got one that functioned satisfactorily. Neither of these units uses NAND, and both are still going strong. The Tapwave was to have been a new gaming platform. Unfortunately, it was released right before the PSP.
As for Sony’s later offerings, they were uniquely designed on the outside but woefully underpowered and way overpriced. I think the UX50 ($500 or $600?) and some others used Sony’s proprietary CPU, which only reached a speed of something like 125 MHz. Sony never advertised the speed, of course.
I looked at the UX50 when it came out. Sony shrank the screen to fit the device, so it was smaller than competing Palm screens, and darker. The keyboard buttons were completely flush with the wavy base and made it hard to type. Another case of form over function, I’m afraid.
Once Sony left the PDA business, Palm was doomed, partly because it had no competition to push it forward. By that time the cell phone revolution had started.
Edited 2013-03-12 18:29 UTC
I had the choice between the LifeDrive and the T|X. I went for the T|X because it was cheaper and part of a bundle that included TomTom navigation.
At the time I remember the LifeDrive being a very nice and interesting device, at least in theory. Apparently it wasn’t so nice in real-life use.
If the LifeDrive hadn’t had its flaws, it might have become successful enough to give Palm some extra hit points.
I’ve not read it yet and I’ve paid up. This is such a great idea Thom. I hope it gets the support it deserves.
Hey OSNews and Thom,
I have been lurking on OSNews and reading your articles and comments ever since 2002. I registered just to say that I purchased the PDF. Thanks a lot for your work, and keep it up! I will support OSNews.com; it has given me a lot of insight over the years.
// DragonFlyBSD fan and Debian user
Now I know the complete Palm story. The webOS part is a bit too brief, though – and where is Hawkins now?
Thanks for the detailed article. I mostly read the section on Cobalt, since that is primarily what I am familiar with w.r.t. Palm, and I thought I could add a few things to the information you have.
Ultimately, there was very little of traditional BeOS in PalmOS Cobalt. The main technologies that came from Be were things that were under development for the “next generation” BeOS that was being driven by Be’s focus shift to BeIA from a desktop OS. The foundation for that was the Binder system which, after Cobalt went away, was ultimately open-sourced as OpenBinder and I have archived at http://www.angryredplanet.com/~hackbod/openbinder/docs/html/index.h… for reference.
Nothing like the OpenBinder software was ever used in BeOS — the implementation in Cobalt and ultimately open-sourced was the third iteration of the design, which was a complete redesign and rewrite from the binder system that was implemented (and barely shipped) in BeIA. However a lot of the implementation of the original OpenBinder code and higher-level frameworks was done on top of BeOS, until there was a sufficient environment to work on it in the core code that would ultimately become Cobalt.
There were a bunch of higher-level parts of the system built on top of Binder for Cobalt, such as a distributed view hierarchy / UI toolkit / windowing system, the media system, etc. These were by and large implemented after Be was acquired by Palm, by mostly ex-Be engineers working at Palm and then PalmSource. The “rich graphics support” was also largely the result of a rendering engine implemented by ex-Be engineers while at PalmSource. Many of these engineers had also been deeply involved in the design and implementation of BeOS, and were taking the lessons learned there to create improved designs for Cobalt. For example, the BeOS rendering system was extremely primitive compared to the new one implemented in Cobalt; the Cobalt system was actually much more like what we expect now on these devices, with full anti-aliased 2d path-based rendering and rich alpha-blending (and not incidentally designed with expectations of being able to take advantage of OpenGL-based GPUs in the future).
The graphics marketing material is actually kind of funny. That whole “screen sizes of up to 32000×32000 pixels” thing? Yeah, well, all that is based on is the fact that pixel coordinates were 16-bit integers. Which is of course *stupid* – by that point, 16-bit coordinates were kept only for compatibility reasons, and in fact 32000 pixels is not a lot once you start thinking about scrolling through large data sets. If I recall right, this came about because some marketing people came to us wanting to know the maximum screen resolution the new system would be able to support – and what do you really say to that? Well, the maximum range of a coordinate on screen.
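(For anyone checking the arithmetic: a signed 16-bit integer tops out at 2^15 − 1, which is exactly where the advertised figure comes from, rounded down for marketing purposes.)

```python
# Signed 16-bit coordinates: one sign bit plus 15 magnitude bits.
max_coord = 2**15 - 1
print(max_coord)  # 32767 -- hence "screen sizes of up to 32000x32000"
```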
The initial Cobalt implementation was a transition from the old to new PalmOS. Everything under the hood was an entirely new OS with a much more advanced object-oriented application model, UI system, and other features. However, to get the first product out, there wasn’t time to finish all of those pieces, so they were only being used to implement a compatibility layer that sat on top to provide the traditional environment for existing Palm applications. I believe all of the applications you see in the simulator are traditional PalmOS applications (on top of the compatibility layer) — at this point in the new framework there were basic widgets (buttons, check boxes, etc), simple view layout managers, and a lot of other infrastructure, but it was still missing some final higher-level pieces (like list views) and the API was still too in-flux to be able to write complete applications on top of it.
A *lot* of work was put into that compatibility layer. Many engineers grumbled about how much time was being spent on it, taking away from time to flesh out the New Hotness. It wasn’t just old PalmOS in a compatibility box – all the original PalmOS drawing primitives in it were re-mapped to the new path-based renderer, each form the application made was mapped to a formal window object in the new underlying Cobalt architecture, etc. This allowed us to expose a lot of the features of the invisible new architecture to the old application model, such as rendering fonts with TrueType, a new rich 2d drawing API that could be used alongside the traditional Palm APIs, or the status bar slips, which were implemented by running a limited Palm API compatibility box so the developer could put a traditional form UI in that separate screen area. Plus there was still PACE running, so it could run your old 68k Palm applications inside PACE, on top of the traditional Palm ARM API compatibility layer, on top of the new Cobalt architecture. Thinking about this too much would make your head hurt, but ultimately it all worked quite seamlessly.
As far as the lack of manufacturers picking up Cobalt, there were a number of reasons I saw for this:
– At that point device manufacturers still didn’t appreciate the importance of software (hardware was the primary focus, then software), and so didn’t see any reason to buy it when they could just as well build it themselves, since they were already creating the main part – the hardware – anyway.
– Device manufacturers were and still are very aware of what happened in the PC industry, where one platform provider came to dominate it and turn all of the hardware into a commodity. They didn’t want this to happen to them, so were not only deeply suspect of Microsoft, but of any other company that looked to be trying to become a Microsoft in their world.
– This was a very transitional phase in the industry: PDA style devices were frankly fairly niche compared to the mobile phone market, mobile phones were rapidly getting more advanced and becoming more PDA like, and the Palm-style stylus-based UI did not seem like something that would be much more than a niche compared to traditional phone UIs.
– It was a huge investment for a company to ship their first PalmOS Cobalt product because the entire platform was unique and so needed custom driver work for every piece of hardware on the device. This was part of the motivation for the later switch to adopting Linux as the kernel. (It was also a significant part of the reason for Android building on Linux, which worked out very well there. Linux is entirely sufficient as a kernel for a mobile OS, it’s the stuff that is normally taken on top in user space that will kill you. Which kernel is being used for a device is basically irrelevant for the user’s experience.)
– As far as Palm not adopting PalmSource, I didn’t have enough interaction with them to do more than speculate. There was definitely a lot of bad blood over the Be acquisition, where Palm engineers saw the platform they had been working on for years being thrown away and replaced with something new (though not with BeOS in any real sense – just with new software designed and written primarily by Be engineers while at Palm/PalmSource). Many of the engineers who were most unhappy with the upheaval in the software at PalmSource moved back to Palm to continue work there on the old PalmOS platform.
One thing I don’t think had anything to do with Cobalt’s adoption was Apple. If nothing else, the timing just doesn’t work out: during the time from when Cobalt was done and available to manufacturers until Apple first showed the iPhone, I was involved with implementing a large part of a completely new UI design based on the new frameworks in Cobalt, watched that get dropped, left the company for Google and, starting at pretty much ground zero with a raw Linux kernel, worked with a small team to build an entirely new operating system that was well on its way to being done. Cobalt was being shopped around in early 2004; Apple didn’t acquire FingerWorks until 2005. Even way later, when Android was being shopped around, it was hard to get interest in it (even with it being open source!) due to the same issues with manufacturers and their relationship with software. In fact, I think the introduction of the iPhone was very good timing for Android, since it finally burst a lot of bubbles about the (lack of) importance of the software platform, and Android was basically already there, ready to provide a similar software platform for other companies to use.
As you hint, there were indeed actual PalmOS Cobalt devices that were under development, in conjunction with the software work on the platform. There was one major company that PalmSource worked with for quite a while and was close to shipping, and then that company canceled the project. (I heard later that standard practice for this company was to have a bunch of devices under development at the same time, and then towards the end kill all but a couple that they thought had the most potential. I don’t however have any idea what the actual circumstances were for this, just that it came as quite a shock to the engineers because we thought things were going well.)
I mentioned above about another UI design built from Cobalt, which I didn’t see mentioned in the article. This was actually shown publicly — http://www.mobileread.com/forums/showthread.php?t=4153 is a reference to it that I found with a quick search. This was much more than a concept; this was the new modern PalmOS that was built directly on top of the new frameworks that were hidden away in Cobalt, and had a significant amount of working implementation. A big point of it was to provide a UI that works *without* a touch screen, because the traditional PalmOS design requiring a stylus touch screen had actually been a big stumbling point for getting interest from phone manufacturers. And there actually was significant interest from at least one manufacturer, who ended up in a bidding war with ACCESS over PalmSource because they wanted to get the Rome platform. (That said, I think PalmSource would have many of the same troubles trying to license the software to other manufacturers, for many of the same reasons Cobalt had trouble.)
Finally, I would like to say the design of Android actually took a lot of inspiration from Palm. Many of the core engineers on Android came from PalmSource (most having arrived there from Be), and saw Android as an opportunity to do what PalmSource was trying to accomplish in an environment that was more likely to succeed. For example, Android’s Intent system is actually a greatly evolved version of PalmOS’s sublaunching, based on a lot of ideas in the Rome architecture. I find it amusing reading that linked article about examples we had shown of what Rome was doing, which have a direct lineage to things like Android’s sharing and other Intent-based features. Or another example is Android’s initial design to support different density displays (created long before Apple’s whole Retina thing), which came directly out of our experience with how PalmOS was dealing with different densities and how we could make that so much better if we baked that concept into the platform from the start. You can actually trace an evolution from Palm’s original attention manager, to the status bar slips in Cobalt, through the notification facility in Rome, to the notification system that has been part of Android since it first shipped. And Android’s application model based on a single foreground application that must be able to save and restore its state as you leave it and return has a strong lineage from work at PalmSource on how to add multitasking to Palm’s original single tasking model.
Thanks for the additional details.
What does a “distributed view hierarchy” mean? Does it mean that some views can run in different processes?
Can critical components (e.g. Flash) run in a different process and render into a local view? Or was it that a view and its component were one unit running in a different process? Thanks for the nice insight, btw!
Yep, it allowed you to have process boundaries at any point in the hierarchy, and one of the significant motivations was indeed dealing with things like Flash content. Basically, the entire UI was one single distributed view hierarchy, rooted at the display, with the window manager sitting at the top, owning the display and the top-level children there. If you know about OpenBinder, a short description is that in this design every View was an IBinder object, so it had binder interfaces for accessing the view, allowing these calls to go across processes. In particular, there were three interfaces:
– IView: the main interface for a child in the view hierarchy.
– IViewParent: the interface for a view that will be the parent of other views.
– IViewManager: the interface for managing children of a view.
These interfaces also allowed for strong security between the different aspects of a view. For example, a child view would only get an IViewParent for its parent, giving it only a very limited set of APIs on it, only those needed to interact with it as a child.
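A minimal sketch of how those three interfaces might fit together (the interface names come from the comment above; every method and the concrete View class are my own invention, not the real Cobalt API):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

struct IView {                       // what a parent sees of its child
    virtual ~IView() = default;
    virtual void draw() = 0;
};

struct IViewParent {                 // the *only* thing a child sees of its parent
    virtual ~IViewParent() = default;
    virtual void invalidateChild() = 0;  // "please redraw me"
};

struct IViewManager {                // for whoever manages a view's children
    virtual ~IViewManager() = default;
    virtual void addView(std::shared_ptr<IView> child) = 0;
    virtual std::size_t childCount() const = 0;
};

// One concrete object implements all three interfaces; the security comes
// from only ever handing out the single interface a caller is entitled to.
class View : public IView, public IViewParent, public IViewManager {
public:
    void draw() override {
        ++draws_;
        for (auto& c : children_) c->draw();
    }
    void invalidateChild() override { ++invalidations_; }
    void addView(std::shared_ptr<IView> child) override {
        children_.push_back(std::move(child));
    }
    std::size_t childCount() const override { return children_.size(); }

    int draws() const { return draws_; }
    int invalidations() const { return invalidations_; }

private:
    std::vector<std::shared_ptr<IView>> children_;
    int draws_ = 0;
    int invalidations_ = 0;
};
```

In the real system each of these would have been a remotable binder interface, so the boundary between a parent holding an IView and a child holding an IViewParent could also be a process boundary.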
You can look at the documentation on the binder storage model at http://www.angryredplanet.com/~hackbod/openbinder/docs/html/index.h… to get a real-life flavor for a similar way to deal with objects that have multiple distinct interfaces with security constraints between them. In fact… the view hierarchy was *also* part of the storage namespace, so if you had the capability to get to it you could traverse down through it and examine it, such as from the shell!
Many of the fundamentals of this design were carried over to Android. Android dropped the distributed nature of the view hierarchy (switching to Dalvik as our core programming abstraction, with Binder relegated to dealing only with IPC, led to some different design trade-offs), but we still have our version of IViewParent in http://developer.android.com/reference/android/view/ViewParent.html and the basic model for how operations flow down and up the hierarchy came from solving how to implement that behavior in the Cobalt distributed view hierarchy.
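As a rough illustration of that down/up flow (entirely my own toy code, not Android’s or Cobalt’s implementation): operations like drawing are pushed down from the root to every child, while requests like invalidation bubble up the parent links until they reach the root, where the window manager would schedule the actual redraw.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy view node illustrating the two directions of flow in a hierarchy.
class Node {
public:
    explicit Node(Node* parent = nullptr) : parent_(parent) {}

    Node* addChild() {
        children_.push_back(std::make_unique<Node>(this));
        return children_.back().get();
    }

    // Downward flow: a draw traversal visits this node and all descendants,
    // returning how many nodes were visited.
    int draw() {
        int visited = 1;
        for (auto& c : children_) visited += c->draw();
        return visited;
    }

    // Upward flow: invalidation marks each node dirty on the way to the
    // root, which is where a redraw would actually be scheduled.
    void invalidate() {
        dirty_ = true;
        if (parent_) parent_->invalidate();
    }

    bool dirty() const { return dirty_; }

private:
    Node* parent_;
    std::vector<std::unique_ptr<Node>> children_;
    bool dirty_ = false;
};
```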
I find that pretty interesting stuff!
I don’t quite understand why switching to Dalvik made you drop the distributed nature of the view hierarchy though.
I looked into OpenBinder before. As part of my PhD project, I am looking into how the behaviour of an application can be changed at runtime (basically by rewiring components). For the prototype I used many ideas from OpenBinder, and especially pidgen to generate the base binder object code. However, I mixed in a bit of Qt’s thread-safe signal/slot semantics to make it easier to make stuff thread-safe (http://qt-project.org/doc/qt-4.8/threads-qobject.html).
Is there actually something you miss from the Cobalt platform (binder related) that is not in Android?
Part of this is that a lot of those features in OpenBinder were based on creating a dynamic nature as part of the core binder design. When we went back and started on a new platform based on Dalvik, we already had a language with its own dynamic nature. Just taking what had been done for OpenBinder would leave us with these two conflicting dynamic environments. We even ended up dropping the basic ability to have multiple binder interfaces on an object because that didn’t map well to the Java language. (In theory you can still implement such a thing on Android based on the native binder framework, but not in Dalvik where most of the interesting stuff happens.)
There was also just a practical issue that we couldn’t take the OpenBinder code as-is for licensing reasons, so we needed to re-write it for what we shipped in Android. The development schedule for Android was pretty tight, so we needed to be really efficient in building the system, and reproducing all of OpenBinder and the sophisticated frameworks on top of it that weren’t open-sourced would have been a lot of work that was hard to justify vs. what we would get by going back and doing something slightly different that leveraged a lot more of Dalvik.
And ultimately it was a different team that built Android: yes, some key people were from PalmSource with the experience with Cobalt, but there was a lot of influence as well from people coming from Danger, Microsoft, and other places. In the end, ideas from all those places were mixed together by picking and choosing those that seemed best for the project.
I also think that from a development perspective, building most of our system services on top of Dalvik has been a good thing for Android. The Dalvik environment is just a much more efficient development environment than C++; with all of our very core system services like the window manager and package manager written in it, we can move much more quickly in evolving our platform and more easily triage and fix bugs. (Given a crash report from someone’s device, you are very often going to be able to identify and fix the problem far more quickly when it happens in Dalvik than in native code.)
Oh, if you have a decent understanding of the APK and process design of Android, it may be entertaining to read the process model documentation for OpenBinder: http://www.angryredplanet.com/~hackbod/openbinder/docs/html/BinderP…
A *lot* of the ideas there appear in Android, transmuted for the new environment they find themselves in. Keep in mind that this process documentation is describing a core foundation of the code that was running PalmOS Cobalt and then Rome. We hadn’t quite gotten the full security and process model in place that we wanted for third party applications, but you can see the comments at the bottom there leading to very similar concepts in Android — signing to determine where things can run, permissions, etc.
It seems there is little or no reason provided to explain why Palm failed on the market.
And no, I’m not talking about the “WebOS” part, which is there, but which is “no longer Palm as we know it” in my opinion; it’s “another kind of Palm”, Palm 2.0, or Palm ReBorn if you wish.
But the previous lines of PDAs, the ones that were dominant with a 90%+ market share and insane margins: how come they finally died? I would love to hear some explanations on this one.
Wow, this article took me the better part of a week to read entirely, bit by bit, but it was certainly worth it. You took your research quite far and did a good job of making it interesting!
I’d gladly buy it just for the sake of saying “I am ready to pay for such articles”. But for this, I’d need some other payment method, since I can’t bring myself to trust bank cards. Could you please try a billing service which accepts PayPal payments for a future article?
I still have my Palm 5000, and my Kyocera 7135 flip-phone. I also owned the Treo 650 and the 700p.
And last but not least, the Treo 755p, which was retired just 3 weeks ago for a generic LG L9 Android phone.
I avoided the Centro because of its tiny screen. Today there is no phone that matches the 1/4″ font size used in the address book of my Treos. Not one. They all use tiny fonts on megapixel screens that my tired eyes can’t read without glasses.
Most of all, I miss the terrific Treo keyboard that no other phone could rival (and I tried them all). When I got my 650 I gave up Graffiti instantly, and I never looked back.
Until today, when I installed Graffiti for Android, just to get away from that annoying touch keyboard. It all came back in a rush. Entering text without looking at the screen! Without a stylus! Yippee! I’m FREE!
Your article was so perfectly timed for me, letting me know once and for all that it was time to let PalmOS go.
But not Graffiti.
Thanks.