The web browser has been the dominant thin client, now rich client, for almost two decades, but can it compete with a new thin client that makes better technical choices and avoids the glacial standards process? I don’t think so, as the current web technology stack of HTML/Javascript/Flash has accumulated so many bad decisions over the years that it’s ripe for a clean sheet redesign to wipe it out.
For the uninitiated, let’s start with some definitions. Thin clients have been around since the early days of computing, as there has always been a need to provide compact User Interfaces (UIs) remotely, such as with text terminals or the X Window System. I define thin clients as client-server platforms that use at most 10-30% of desktop resources such as the microprocessor and that do not deploy general-purpose program code, as exemplified by JavaScript in modern web browsers. Rich clients, by contrast, have richer UIs and usually deploy code that absorbs a larger share of desktop resources, such as C# in Silverlight and Java in JavaFX. HTTP/HTML started out as a thin client but has been turned into a hobbled rich client with the addition of JavaScript, stuck with HTML for legacy reasons. The other widely deployed rich client is Flash, which has evolved from a simple animation platform into an often inefficient rich client.
I argue that rich clients have efficiency and security problems that we should push off for later: right now, we should focus on building a better thin client. The essence of a thin client is to implement the most common graphical UI elements across multiple hardware and OS platforms while maintaining security. A thin client can be thought of as a first stab at ubiquitous network computing, where the security hassles of having the information superhighway coming into your PC (and the internet bandits and thieves who can now drive right up to your computer door) are handled by the thin client platform so that developers don’t have to. I estimate that 30-50% of all applications can be implemented on a thin client platform, as most applications don’t need much more than a thin client provides. And finally, thin and rich client platforms are hugely important to the future of operating systems, as I can write this today in a web browser on a FreeBSD desktop precisely because of the web’s ubiquity as a thin client.
But what would a better thin client design look like? It would provide better and more modern GUI elements than HTML, updating a standard widget set that dates from HTML’s introduction almost 20 years ago. Sessions would be the atomic unit of its network model, focusing on highly interactive user experiences rather than the old-fashioned request-response model used by HTTP/HTML. A binary encoding would greatly increase network efficiency, eliminating much of the wasteful uncompressed text sent over the network; I estimate that HTML makes up approximately 5% of network traffic. Graphic designers would use GUI tools exclusively to work with this binary format, which works out perfectly as nobody wants to muck around with a markup language like HTML anyway.
Finally, the internet standards process has not been beneficial for application protocols like HTTP/HTML. Rather, it has led to multiple implementations, each with its own quirks. Part of this is because of the vagueness of important sections of open standards; perhaps it is impossible to precisely specify any reasonably complex technology. Part of it is because every implementer tries to differentiate themselves by adding special features that are incompatible with alternate implementations. What is important for a software platform is that it is open to reimplementation and the resulting competition, so that developers always have the choice of moving to another implementation, not that it conforms to some preconceived standard. I’ve laid out some ideas on what should replace the standards process at my blog.
A better thin client design would follow the contours of these choices, but adoption is the primary issue for any new platform. However, building a new thin client need not be a giant undertaking, as much source code can be reused from existing open source web browsers, such as cross-platform image and text libraries. The better user experience enabled by such a platform would attract internet application developers, and it wouldn’t be hard to write a server-side converter that translates HTML and the trivial JavaScript used in AJAX into the new binary thin client format. This would allow developers to reuse existing HTML and AJAX code until they can fully port it to the new binary format. Also, one could bundle an existing open source web browser, such as Google’s Chromium project, with the new thin client on the client side, so that both technologies can be used side by side in the same program. Building in a complementary technology like a micropayments system, which the web has appallingly failed to deliver to this day, would make it a “killer app” platform.
The web, as composed of HTTP/HTML/Javascript/Flash today, is a highly inefficient and insecure internet application platform. We can build something much better by rethinking some basic design choices. Instead, we have been incrementally building on the web stack for 20 years now and it shows. The web is ripe for disruption: it’s not a question of if it will happen, only what will replace it and when.
—
Mufasa blogs about GUI and thin client issues at A new thin client.
I’m a bit tired of people trash talking the web as a platform. It did suck to develop for 3 wildly different browsers 10-15 years ago, especially since we didn’t have things like Firebug.
Nowadays that pain is virtually non-existent, especially compared to what it used to be. JavaScript is an absolutely wonderful language, and CSS + semantic HTML is (in my mind) a much more elegant UI development experience than anything else I have run across. Given the choice between client-side or web development, I will choose web every time, for no other reason than that I enjoy working with the technology.
Now, while there are A LOT of people who think like me, you may know a large group of them: they are called Google. The most important company on the net is 100% behind web technologies, is not trying to reinvent the wheel or displace them with new protocols, and is choosing web technologies over client apps in its upcoming client OS. The leader among those who don’t agree with me (Adobe, with Flash) is ALSO now choosing to push web technologies onto the desktop with AIR, which uses HTML/JavaScript/CSS. The Palm Pre, which is the only phone to come close to being an iPhone killer, is also choosing HTML over other options for its SDK.
I don’t think the web is ripe for disruption; in fact I think the opposite is true. Way back when Java applets were used, they were only used because people had no other option. As soon as they had an option, there was a mass exodus to Flash. With web technologies not only is that not happening, but given a choice, people would rather use them over the alternatives for things other than websites.
I will choose client every time. Creating a client side app that dynamically creates tabs is easy. Creating rounded buttons is easy.
On the web, you can dynamically create tabs on the server side with PHP / Python / CGI / whatever. But then you need to use CSS somehow to have the currently selected tab appear highlighted. If each entry in your menu has its own CSS class you now need to dynamically generate CSS. What a pain. Alternatively you could use Javascript to make it appear highlighted. Now you’re using three different technologies all loosely bound together.
Now, go and create a nice rounded button. You need all sorts of CSS hacks to do that. You need 5 different image files. Go change its color and you need to slice it all up again.
Don’t get me started on layouts.
Web sites that try to act like native apps with tabs and layouts all get overcomplicated. Every click needs to make a round trip to the server, and to make it appear that it doesn’t, you use AJAX, which complicates things further.
If you dissect a native app you may find complications as well. You could say every click needs to travel round trip through layers of APIs down to the kernel and system interrupts back up to the userspace program. The thing is, though, that this is transparent to the user as well as the developer. Maybe the web just needs better tools to make development nicer. GWT looked pretty good from the beginning and is looking even better with the changes that Google Wave needed. Maybe GWT or other technologies like it are the answer. I haven’t read about SproutCore since it came out… not sure if it is similar to GWT or not.
I’ll admit, a lot of this has to do with taste. My point still is that the majority of people are not going to jump ship the second an alternative is available.
In CSS3, border-radius is defined. It works in the latest version of everything except IE, which will simply ignore it.
Changing the color of a tab requires changing a single value on a single line. If it’s a gradient, you will need a new image; gradients are part of CSS3, but WebKit is the only engine implementing them at the moment.
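To make that concrete, a minimal sketch (the class names and colors are made up, and circa 2009 rounded corners and gradients needed vendor prefixes):

[code]
/* Hypothetical tab styling: one rule rounds the corners, one value sets
   the color. Prefixed properties reflect the browsers of the day. */
.tab {
  -moz-border-radius: 6px;      /* Firefox prefix */
  -webkit-border-radius: 6px;   /* Safari/Chrome prefix */
  border-radius: 6px;           /* the CSS3 property; IE ignores it */
  background-color: #3b76c4;    /* change the color: one value, one line */
  /* Gradient backgrounds were WebKit-only at the time: */
  background-image: -webkit-gradient(linear, left top, left bottom,
                                     from(#5a92d8), to(#3b76c4));
}
.tab.selected {
  background-color: #e8a33d;    /* the highlighted tab just swaps the value */
}
[/code]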
I actually find layouts far simpler in HTML than with layout managers in client apps, and they require a lot less code.
AJAX is insanely simple nowadays. In Rails, for a form to submit asynchronously you just use the form_remote_tag helper instead of the regular form_tag. As I said before, link_to_remote is also available. Both of these are about as hard as making the actual form or link tags normally.
The biggest problem I find people have programming for the web is that they spend an hour looking at it, assume it is easy and that they get it, and then get frustrated when they hit a wall. My stand-by interview questions for web-related jobs are “Explain prototypical inheritance in javascript” and “Explain the difference between block elements and inline elements” or “How would you write a selector for all p tags inside a div with an id of ‘paragraphs’?”. These are very basic questions, and if you cannot answer them it means you don’t know how to write web applications. It is downright shocking to see how many people can’t answer them, especially the one on javascript inheritance. These are the people who then turn around and write mountains of awful code, or long blog posts about how much HTML and javascript suck.
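For anyone curious, a rough sketch of the kind of answer the first and third questions are after (the names here are invented):

[code]
// Prototypal inheritance: objects delegate to other objects through the
// prototype chain instead of instantiating classes.
function Widget(name) { this.name = name; }
Widget.prototype.describe = function () { return "widget: " + this.name; };

function Tab(name) { Widget.call(this, name); }  // reuse the constructor
Tab.prototype = new Widget();                    // delegate lookups to Widget
Tab.prototype.constructor = Tab;

var t = new Tab("home");
alert(t.describe());   // "widget: home", found via the prototype chain

// And the selector question: every p inside the div with id "paragraphs"
// is simply "#paragraphs p" in CSS or any selector library.
[/code]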
FYI… I just googled prototypical inheritance and got “Did you mean: prototypal inheritance”. Hopefully that was just a typo and you don’t pronounce it that way.
I used to say “orientated” instead of “oriented” and my co-workers never let me hear the end of it.
I knew someone who pronounced gigabyte as “jiga byte”… sounded like an idiot.
Yeah…. it was a typo. I wish you could edit for more than, like, five minutes here, because any time I write anything more than a few lines it is usually riddled with typos.
I prefer “oriented”, but “orientated” is accepted as an English word. Pronunciation of gigabyte with a soft ‘g’ is also accepted, though not the preferred pronunciation. I prefer it, though. (b’duh! b’duh! b’duh!)
http://dictionary.reference.com/browse/orientated
http://dictionary.reference.com/browse/gigabyte
1.21 jigawatts?!
This is simply BS. You don’t need a round trip for showing tabs and layout. And you don’t need AJAX for that either. You use AJAX to dynamically update a page: you load a small amount of data from the server and add it to the page dynamically. That is cheaper than a Web 1.0-style round trip that reloads the whole page.
You don’t need a round trip at all for showing tabs. You can do that with JavaScript, playing with the DOM and CSS. And using a JavaScript library such as Dojo will give you a set of predefined components that make it really easy to show tabs on your page, and to have buttons with rounded corners.
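Without even reaching for Dojo, a minimal sketch of client-side tabs looks something like this (the element IDs and class names are assumptions about the page’s markup):

[code]
// Switch tabs purely on the client: toggle a CSS class and show/hide panels.
function selectTab(tabs, panels, index) {
  for (var i = 0; i < tabs.length; i++) {
    tabs[i].className = (i === index) ? "tab selected" : "tab";
    panels[i].style.display = (i === index) ? "block" : "none";
  }
}

window.onload = function () {
  var tabs = document.getElementById("tabbar").getElementsByTagName("li");
  var panels = document.getElementById("content").getElementsByTagName("div");
  for (var i = 0; i < tabs.length; i++) {
    (function (i) {                 // capture i for the click handler
      tabs[i].onclick = function () { selectTab(tabs, panels, i); };
    })(i);
  }
  selectTab(tabs, panels, 0);       // show the first tab on load
};
[/code]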
Don’t be mistaken. I am definitely not of the opinion that every client application has to become a web application. I do think that the big power of web applications is related to deployment. Deployment or roll-out of web applications is much, much easier and cheaper than for desktop client applications in, for example, a big company with hundreds of desktop computers.
In any case, your post shows that you are not really familiar with the technology used in web applications.
While I also believe web apps are too complicated for now, I can’t really get your point. Why the hell should I bother with some special style of button? The thing users need most is coherence, so if all the button and font customisations (except for generic serif/sans/monospace/symbol) migrate from CSS/whatever_on_server_side to the browser/whatever_server_independent, it will be the greatest day in webdev history.
P.S.: Same for client-side-only apps actually.
[quote]”The web, as composed of HTTP/HTML/Javascript/Flash today, is a highly inefficient and insecure internet application platform.”[/quote]
Unless the porn industry packs its bags and leaves this web, I don’t see another web application platform emerging until a government-ruled internet is in effect.
The web is open; there will always be good guys and bad guys. A new application platform is not the solution to an “inefficient & insecure” internet.
Hi,
Most thin clients actually run protocols like RDP, VNC, Citrix ICA and X11, where the web browser runs on the server.
If the “thin” client is fat enough to run a modern web browser (and the OS underneath the web browser), then “a better thin client design” would probably involve distributed computing. The funny (ironic) thing is that most modern “thin” clients actually are fat enough to run Linux/WinXP; the only thing that makes them “thin” is the software installed on them.
-Brendan
To the author:
Don’t write articles where a *nice sun shines through the clouds and will guide you to the path of salvation*. I want F***ing TECHNICAL details. Such a time waste… (and I’m making it worse by commenting).
Just try one thing: remove the terms thin and fat client from the article. This will force you to be much more specific.
Cheers,
zimbatm
I modded you down because of your use of language – it was totally unnecessary.
Suggestion: if you’re that unhappy with this article then you could always write a follow-up piece instead of swearing at those who bothered to give up their own free time for you.
The problem is, anyone can come up with ideas, but wishing does not make it so.
This article is like writing an article saying we should switch to cold fusion plants for at least 50% of our power needs sometime within the next 20 years. Without a detailed technical description of how these plants can actually be built, the article doesn’t say much.
Also, writing a follow-up piece is kind of pointless, since all it would say is “sure, but how?”
This article was completely unnecessary. You mentioned it yourself: there’s Silverlight, JavaFX, Flash/Flex, etc. These are all very good for rich clients. Besides, you can create GUI apps that run on X in wxWidgets, Qt, GTK, Java/Swing/SWT, and the list goes on and on and on. All of these can be run through VNC, and so on. There’s no point in ‘inventing’ something ‘new’ again, because that ‘new’ thing would essentially work the very same way. I’d be very much surprised if not.
The web works because of the way it has evolved. You can’t just take that success and transplant it onto another stack.
“Graphic designers would use GUI tools exclusively to work with this binary format, which works out perfectly as nobody wants to muck around with a markup language like HTML anyway.”
*nobody* ?? Really? What evidence supports this?
Would they prefer to rely on expensive paid-for or free “design” tools?
Either sounds like a *major* step backwards, vendor lock-in, risky.
No. The existing stack is open, flexible, proven and get this: *not difficult*.
If anything, some effort needs to go into improving JS frameworks, specifically AJAX frameworks (MooTools, Prototype, jQuery, etc.), building on HTML5. Google and Apple will most likely make major contributions here.
Other improvements could include efficient media handling, lower down in the stack (including TCP – which is something Google *is* looking at), not in the parts that are proven to work.
Completely agree. The big problem with the web is that the W3C blows. We need a process that doesn’t take 5-10 years to come up with a standards recommendation.
Semantic elements, web workers, canvas, border-radius, gradient, and column-layout solve most of the things that are irritations with writing for the web right now. Unfortunately, it is going to take another five years or so before this spec gets out the door, and people can start working on the next list of annoyances.
+1
I think every platform that evolves and adapts will be very hard to beat/replace. And it seems open standards and platforms are inherently better at this than some grand brainchild of some company/single developer.
But there are some use cases where HTML/JS won’t work well; that is why Google is working on Native Client.
Someone’s been working on this with something called “Extensible User Interface Protocol”, or XUP:
http://www.w3.org/TR/xup/
What it’s missing is a well defined client presentation (the actual UI elements that this protocol should display, etc.), but I saw a Swing-inspired sample that would be pretty usable.
I spent some time on my own with something similar, but I haven’t done much with it recently. It is connection oriented, like telnet or ssh, but defines the client UI and the protocol for constructing it, as well as events sent to the server (it’s actually connection agnostic, so it could even run over RS-232). If it ever gets done, it’ll be called “Holistic Interface Control Protocol” (HICP).
Using a TCP connection makes it immensely more responsive than stateless HTTP-based UIs, but it obviously has its own resource usage issues. The client is not much smarter than a TTY, so it needs only a small generic runtime supporting a basic GUI toolkit; no application-specific downloads are needed (not even those growing Javascript libraries sent as part of active web pages).
I still like the idea and intend to get back to it. My prototype doesn’t do more than let you define windows, buttons, labels, and text boxes (and associated events), but I think it shows the potential (prototype client is in Java, server is in Python).
The web’s not broken enough to warrant a complete rewrite. Good secondary school essay, poor OSNews article.
Don’t let the bad feedback put you off contributing but if you want to throw out the current “web client” stack you need to replace it with something that craps gold.
Don’t forget we might lose some democracy and privacy if BIG corporations are going to rule EVEN MORE of our computing. People must start to think about privacy concerns with all this push towards thin clients. Start lobbying your governments to regulate the providers better in this Age of the Internet.
Funny, but I never thought of browsers as being either fat or thin clients. I think of them more as interpreters of a markup language that try to visualize the information for you. They are clients in the “you are my supplier and I’m your client” sense, but not in a “server <-> client” sense. They do fulfill part of that job when downloading content, but that is actually a very small part of their function.
The whole “browser as a {thin,fat}-client” idea starts to be more true in an AJAX setting. But given some new developments it can also act as a standalone application environment. Still, its original purpose is what roughly 90% of the people on the internet use it for: viewing content.
Even in a “web 2.0”-like setting, content is still one of the main motivators (blogs, microblogs, movies, internet radio, photo sites, encyclopedias, ...). If you strip away all those additional features and functions you can still get a reasonable view of the content.
The biggest problem with the current “stack” is that some things can’t be formatted as content because content specification has lagged behind. The current HTML 5 work is a great step toward closing that gap, and with the renewed browser “battles” we can expect faster progress in this area.
Perhaps in the near future everybody will have the joy of viewing vector drawings without any proprietary plugins, but as readable content.
And I do think that text markup is the way to go for any open protocol, for it forces different parties to show others how they specify things, so others can (if they want to) easily create compatible works, something which is hard to second-guess in a binary-only world. And you have a very rich and expressive environment. Sure, you can make greatly extensible binary protocols (e.g. ASN.1-based DER), but you have to be very careful not to have ID clashes. So we use OIDs, which (to be globally unique) are pretty large, and centralizing the unique identifiers just stagnates development. So the gain is not that big. And seeing how many parties have made insane bugs in independent binary implementations of the same protocol is not very encouraging as to binary being more reliable.
I think we can do better, but I think that a real solution is in the direction of better separation of “chrome”, actual content and meta-data. Here independent implementations have the best way to distinguish themselves into getting the same basic thing but better. For real standards are determined by and through the majority using multiple interpretations of those standards.
I’m an old school embedded programmer. But I’ve recently started doing some web development. More towards internal tools and what not.
I sit around wondering WTF these web standards people have been doing. I really do. The web gave us a new chance to develop a platform with all we learned from application development over the past 30 years… and what did we do? We repeated pretty much every mistake we made in application development, and added some new ones to boot.
It took us some time to learn to separate GUI presentation from application logic. Here I am looking at web code and everything is scribbled into one file. Everything is untyped…
Now don’t get me wrong. I know this is how things evolve; I’ve been in more than one networking standards process and know how ugly things can get. Everyone pushing their way, trying to maintain backwards compatibility, hacking things onto current frameworks… It’s certainly understandable how the HTML part of the web, which began more as a document format, ended up like this.
I don’t know. These are not new problems. Event handling, manipulating objects… these are all easily solved problems. The trouble is hacking them onto an existing base.
My own view:
The web doesn’t need a new thin client. It definitely needs some better development tools and frameworks, even if they restrict what you can do. I’d like to see a lot of the details pushed underneath. To treat a lot of the underlying stuff the way C programmers treat machine code. Yes, we know it’s there. We know how it works. If push came to shove, I could deal with it. But I don’t want to know the details every day at work.
Rich clients can and will become more popular. But they’re not mature enough yet. I’ve played with Silverlight 2. You can certainly see the potential. If I had to develop any complex application (charting…) I’d most certainly use it.
I think the author is missing the point in a few places. For example:
We already have an alternative to sending uncompressed text over the network. Compress it. All web browsers (and most other HTTP clients) support gzip and deflate compression, and virtually all web servers support it.
As long as you’re encoding the same information, there isn’t going to be much difference between a binary format, and a compressed text format. Markup compresses extremely well.
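As a rough illustration of how well repetitive markup compresses, here is a small sketch using Node’s zlib module (the sample page is invented; exact ratios will vary):

[code]
// Generate a page of repetitive markup, the kind dynamic sites emit,
// and compare its raw size to its gzipped size.
var zlib = require("zlib");

var html = "<html><body><ul>";
for (var i = 0; i < 500; i++) {
  html += '<li class="item"><a href="/item/' + i + '">Item ' + i + '</a></li>';
}
html += "</ul></body></html>";

var raw = Buffer.from(html);
var gzipped = zlib.gzipSync(raw);

console.log("raw bytes:     " + raw.length);
console.log("gzipped bytes: " + gzipped.length);  // typically a small fraction
[/code]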
Except for the fact that writing HTML, especially with a good editor, is extremely fast. Any editor capable of dealing with everything HTML can do (and XML, for that matter) needs to be complex, and any less capable editor isn’t really useful.
Just a note – Silverlight uses XAML, which is XML. It has a UI designer, but it’s almost useless. It’s far simpler, and more productive, to edit the XML directly.
Certain parts, like templates, are easier to edit using tools (like Expression), and designers would be using those. Just as designers working on web pages would be using a good CSS editor, rather than editing the CSS directly.
Flex uses much the same approach – the UI is defined using XML, and it can be separately styled. Using an extended version of CSS, in fact.
Yes, lots of web applications are badly written, mix functionality, presentation, appearance and data freely, and are written in such a way that you can’t use the better tools that are available. That doesn’t mean that they have to be.
Insecure? Only because of legacy crap – the days when web browsers didn’t bother with security at all. These days, browsers are actually very secure in themselves, and they’re providing new functionality to make web applications more secure as well.
The reason there are so many security vulnerabilities in most web browsers is because they represent the primary attack surface. Any replacement would have the same problems here.
Compare the number of vulnerabilities found in major web browsers to, for example, Flash. Or Acrobat.
It’s not only a network latency/bandwidth problem. Computers don’t like compression and text parsing. They will happily do the work, but not very fast.
Think about what happens when sending compressed HTML/XML:
Sender:
1. Create text in buffer (memory and text creation overhead).
2. Compress the text (cycles and more memory touched).
3. Send data.
Receiver:
4. Receive data.
5. Decompress data.
6. Parse text.
With binary you skip steps 2 and 5, while the rest have a smaller overhead.
Then people shout “interoperability” and “human-readable”!
Interoperability: When computers talk to each other they don’t care about the encoding as long as they agree on the format.
Human-readable: This is only needed for debugging; then use a debugger! You just need a tool that can transform the binary to a human-readable format.
Imagine if machine code and the TCP/IP headers were text. Would you defend such a design?
Why would you not compress the binary format?
A typical HTML page contains almost entirely text. Not markup – text. Content. That wouldn’t change just because you’re using a binary format, that binary format would still contain large amounts of text.
The solution? Compress it!
Then you end up with a compressed text format, and a compressed binary format, which are about equal in size. Both are easily machine-readable. Only one is human-readable. Simply using a binary format is not going to make the tools magically better, so you’re really not going to get any better tools than we have now.
Of course you can do that, if it makes sense.
Well, HTML is somewhat okay for documents, which is what it was meant for. However, the markup is dominant in most dynamically generated HTML pages. E.g., this OSNews editor page is 366 lines of markup, with very little content (the quote of your comment 🙂 ).
In the case of documents I agree. However, XML is misused for computer-to-computer communication, and many HTML pages are mainly formatting markup.
I’m not talking about tools. I’m talking about performance and usability: smaller server load and better responsiveness in the browser (client).
If we’re planning forward we should remember that to date technology has kept far ahead of demand. I.e., broadband capacity will increase so fast that saving ‘5%’ of traffic will make very little difference to the internet. Besides, a lot of the text sent (HTML) can be gzipped behind the scenes today, making it only slightly larger than a purpose-made binary format.
Whilst a session based network model sounds attractive, so much work has gone into making the current system work with sessions that it would be a hard sell asking developers to move to any new system. They would want to know what new things the system can do. Since sessions can already be abstracted with things like Rails and Javascript libraries the web developer doesn’t need to worry about the implementation details at the moment.
I don’t understand the criticism of the current technology as insecure. Developers will always be able to write insecure software. The current TLS (https) system allows complete security for the end user.
I do agree that Flash should go away for the benefit of everyone. The best way to achieve this being to integrate more functionality into the current systems (like the HTML 5 video tag, and SVG).
I consider it far more likely that the only thing now capable of replacing the web standards that exist is a later version of those same web standards. If you want to make the web different, I think you’ll have to join the standards bodies, rather than replace them.
If you haven’t seen it already, I strongly recommend watching the demo of Google’s Wave at http://wave.google.com to really see what is possible with the current technology. They talk specifically about how the minutia of the low-level implementation is taken away from them by the Javascript libraries they used to create Wave.
There’s too much room for the current technology to grow for it to be replaced at this point.
When I say “thin client”, I mean a thin client of the “smart terminal” genre, vs a thin client in the X-Windows/VNC/RDP genre.
In “thin client” mode, the modern browser is little more than a “smart terminal”. Smart terminals have addressable cursors, fonts, colors, line graphics, some even had downloadable character sets. That’s effectively what a “javascript free” web browser is today.
Add in javascript and XHR, and the web becomes a fat client.
The key limitation of this kind of thin client is extensibility. No matter how rich the “widget set” you allow coded into the standard, someone will want a different one. Whether it is something as simple as formatting or as complex as creating a new widget, despite our long history in the business, no one has come up with a complete set. The needs are always changing, and creativity knows no bounds.
The modern web browser is limited in this way today.
However, it does have an out, of sorts. Today, browsers implement the canvas tag. The canvas tag lets us build pretty much any widget we want. It’s a bitmap canvas that can take mouse hits, which is effectively what most any widget is, at least at its core.
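A minimal sketch of that idea: a canvas “button” that draws itself and hit-tests clicks (the element ID and colors are made up):

[code]
var canvas = document.getElementById("app");      // assumes <canvas id="app">
var ctx = canvas.getContext("2d");
var button = { x: 20, y: 20, w: 120, h: 32, label: "Click me", on: false };

function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = button.on ? "#e8a33d" : "#3b76c4";
  ctx.fillRect(button.x, button.y, button.w, button.h);
  ctx.fillStyle = "#fff";
  ctx.fillText(button.label, button.x + 10, button.y + 20);
}

canvas.onclick = function (e) {
  // Hit-test the click against the widget's rectangle, then redraw.
  var x = e.offsetX, y = e.offsetY;
  if (x >= button.x && x <= button.x + button.w &&
      y >= button.y && y <= button.y + button.h) {
    button.on = !button.on;
    draw();
  }
};

draw();
[/code]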
However, the heart of a canvas widget is not the canvas itself, but the code that powers it. Downloaded code executing on the client machine.
That’s all that Flash is: a square in the browser window with code behind it, code downloaded from the network. Canvas and Flash, at 30,000 feet, are “the same”. Flash just happens to be more efficient, with a richer toolkit.
So, frankly, the quest for a “better thin client” is simply a quest for another thick client, in the end. The web is where it is today because of a combination of capabilities and the market driving it.
There are already revolutions on several fronts (as mentioned), but the web is winning anyway. Each of its competitors as a rich client application platform is working in parallel on an independent path, whereas the browser makers are working on converging paths. Any good idea is getting rapidly incorporated into the entire platform. And with the openness of the implementations, the web moves faster.
People will never agree on what is best, so the result will be as usual. As we can see, the most popular platform on the market is always the one that is the worst technically.
All nice and good, but I do not find it sensible if the HTTP protocol is not drastically changed.
First replace the wooden foundation with reinforced concrete. Then fix the plasterwork. Not the other way around.
Recent milestones are HTML5 and IPv6, but the Internet’s foundation has gotten stuck with the 1997 HTTP 1.1.
Just to put things in perspective: 1997…
– the heyday of Yahoo, Homestead, XOOM and websites using frames
– the days of Flash 2.0, HTML 3.2 (4.0 was introduced in December that year) and the introduction of CSS 1.0
– the time before Napster (estb. 1999), Google (estb. 1998) and YouTube (estb. 2005)
We have been stalling an HTTP protocol overhaul for way too long. We are sending huge files using the hyper TEXT(!) protocol (which is really a hack if you look at it objectively), for which the *FILE* transfer protocol (FTP) had originally been created.
Errr… what exactly is wrong with that? We’re using ssh instead of telnet, too. Hack?
What’s the point of that when you can invent new problems to solve on shaky foundations and put lots of effort into making concrete out of plaster?
For instance, we’ve had some pretty nice high-performance binary RPC mechanisms, but all the rage these days is XML-RPC over HTTP (via AJAX). Or even better, JSON. At least the acronyms sound cool.
XML-RPC makes sense for web services because a) it goes over port 80, which means fewer firewall headaches, and b) it is XML, which is designed to be parsable by anyone (including humans).
JSON makes sense because XML is incredibly verbose, and if you are communicating back and forth with JavaScript anyway, you may as well use its native object notation. It still uses text, again, because you want it to work with web servers.
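To illustrate the difference, here is the same hypothetical call encoded both ways (the method name and parameters are invented):

[code]
// XML-RPC: verbose, and needs an XML parser on the receiving end.
var xmlRpc =
  "<?xml version='1.0'?>" +
  "<methodCall><methodName>user.rename</methodName><params>" +
  "<param><value><int>42</int></value></param>" +
  "<param><value><string>mufasa</string></value></param>" +
  "</params></methodCall>";

// JSON: the same information, already in JavaScript's native notation.
var json = JSON.stringify({ method: "user.rename", params: [42, "mufasa"] });

var call = JSON.parse(json);            // no XML parser needed
alert(call.method + "(" + call.params.join(", ") + ")");
[/code]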
Binary RPC calls make sense when you control both the client and the server, and performance outweighs interoperability.
Why does that have to be the case? Even XML isn’t so easy to parse, so people write reference implementations, etc. Why is a binary RPC system much worse? A human can’t read it directly… but I don’t really see why people are so enamored with reading data directly that is usually parsed only by machine, especially given that many folks crunch down the XML or JSON data anyway in order to transfer fewer bytes.
In my view, human-readable does not always mean ‘interoperable.’
Um? So? Naming aside, what’s the problem? HTTP really has nothing to do with hypertext – it’s just a simple protocol for accessing files, much like FTP. In what way is it a hack?
Okay let’s bring this back to the topic of the originating article: the author suggests many changes at a higher level. My point simply is that if you want to “change”, it would be best to just be radical about it.
Now why is it a hack? (Although, I must admit that “hack” was too strong a word. My apologies.) Let me illustrate this with yet another protocol example:
SMTP is the most common protocol for sending mail. It was never intended to be *the ultimate solution*, considering the fact that it is called the “SIMPLE mail transfer protocol”. All it does is move things around. The problem is that it has no real spam handling, because spam was not anticipated at the time. So we went and created workarounds.
Now I’m not saying the current (E)SMTP implementation is necessarily bad – it works, so we can live with it. But my point is, again, that if you want to turn things around, why not start at the root, which has been left unattended for way too long?
There is no doubt that the pioneers of the Internet would have done things differently. They were, however, limited by the technological advances and insights of their time. But why is it so weird to claim that what I see as more of a “firmware update” would vastly improve the Internet as we now know it?
HTTP *works*. Otherwise the Internet would not have become what it is today. But I’m not for the “if it ain’t broken, don’t fix it” mentality. HTTP was not meant to do what it does today.
Besides, let’s not only look at today. Let’s consider what lies 10 years ahead of us. ZFS and other file systems are anticipating the increase of data load. Soon our Internet will be fast enough to not even need compressed video any longer. If OSes already have to turn their entire file system around, why should this not be necessary for the Internet? It will have its limitations in its current form.
I think the only reason it has not been updated so far is that it would require everyone and everything to move on, as backward compatibility would add more problems than the solutions it would bring.
In what way? As you say, HTTP works. In what way is it deficient, that you see the need for a replacement? Why would anyone use your replacement, if HTTP works?
I’m not so sure that Internet bandwidth will continue to increase, at least to the point that compression will not be required. The technology to do it will probably come about, but whether the economic incentives to do it will be present is something that is not clear to me. Otherwise, we would all have 100Mbps fiber run to our homes (and HDTV would really be HD instead of the overcompressed crap that it is).
This has been an interesting discussion nonetheless. I haven’t done much web development, primarily because my targets are a small-single-digit number of OSes on machines that often have access to a central server, and native client development with dynamic languages and GUI toolkits has been easier to do in that environment than learning the web stuff. But I’ve seen some really nice web apps — though they run far better on PC-like hardware than on mobile devices, where I think the real future of web development lies.
I really like your angle on this, coreyography!
The economic (and municipal) limitations are highly relevant factors, and you’re so right regarding the HD issue. Yet with, among other things, VDSL (an improvement over ADSL using the same wires), we can already achieve much more.
Yes it *is* an interesting topic to wander off about. Thanks for the comment, I appreciate it.
Vexi is a thin client that was born out of the exact frustrations articulated in the introduction to this article.
http://vexi.sourceforge.net/
We have been working hard on it for the past 3 years, and it has been in development for 8 years. Sadly, the website, documentation and demos do not currently show off the technology in its best light, but now that we are on the verge of a major stable release (the first since the project’s inception) that we will support for the foreseeable future, we are working behind the scenes to overhaul the web face of the project so we can retain more interest.
There are lots of cool facets to Vexi development, and I hope to send an article in to OSNews at some point in the future when we can point to our website with pride – hopefully sometime in September. Until then…
This article sounds an awful lot like some old Sun Java stuff from long ago.
All we need to do is make everything run Java! Then we run Java to compute on the server, Java to display on the client, anyone can run everywhere and everyone can have a pony!
Honestly, this Java dream is a lot more realizable today than it was in 1999, but no one seems to be doing it.
Very true. I came to the same conclusion when thinking about the issues the author expressed.
And I think the reason for this can be found in an interesting discussion a few weeks back on OSNews. You had your “C++ OOP” camp, your “(O-)C” camp and your “Java” camp.
I think this diversity in approaches is good; it will allow every developer to choose his own approach, which is likely to yield far better outcomes than any unified construction would (especially if that were to be a compromise between the precompiled / semicompiled / scripting camps).
40 comments about the future of thin clients and no one has mentioned Mozilla’s XUL?
( https://developer.mozilla.org/En/Using_Mozilla_code_in_other_project… )
It was created out of exactly the sort of frustration that comes from trying to use HTML to develop rich GUI applications, BUT it doesn’t throw out the baby with the bathwater. You can deliver XUL via HTTP just like HTML; in fact you can include HTML inside XUL and vice-versa. You can use Javascript, and with signed applications you have access to lower-level capabilities such as sockets and file operations.
HTML was designed as a document presentation language. That was the whole point, since it is just a subset of SGML. While it excels at that, it was never intended as a GUI application language, which is why we have to do all sorts of contortions with Javascript, DHTML and CSS whereas real GUI application toolkits have things like element bindings, observers, and broadcasters as well as rich GUI elements like collapsible trees, resizable lists and grids, etc… XUL has all these and more, and lets you specify them as easily as you do HTML, and because of the overall design, often requires a LOT less Javascript to handle interactions, compared to how one would solve the problem with HTML.
The fact that the same browser can handle XUL and HTML makes it a perfect marriage, because you can still use HTML whenever it is appropriate, without any preamble or boilerplate.
For the life of me I can’t understand why more companies aren’t using XUL for thin client apps. It is a completely cross-platform approach, requiring only Mozilla Firefox, SeaMonkey, or just the special stripped-down runtime called XULRunner.
Arggh… why can’t I edit titles?
Read that as “There is a new thin client–it is called XUL.”
I don’t think reinventing the thin client is a good idea right now. As far as I can tell, web apps are now moving from early childhood to simply childhood, and they still have adolescence, their teenage years and maturity ahead of them.
XHTML+CSS+JS technology isn’t even stable yet, so it can’t be a solid ground for the real analysis needed to create a truly innovative *NEW* thin client technology.
Does HTML5 solve many of the problems posed today?
In any case, perhaps a new language that adheres to current standards yet embraces new technologies could advance us to the next evolution of the web.
In the meantime, for a thin client experience we still need the RDP or Citrix ICA protocols.
http://www.aikotech.com/thinserver.htm
Please stop using OSNews as a free advertising platform for your thin client/server product which you link in most of your posts. (Yes, it’s noticeable.)
And for the record, speaking as an admin, if you can avoid RDP and ICA you are really better off. There are other, better ways to do thin client which I will not advertise here.
In fractal vector space, absolute size and position are irrelevant – what matters is relative position, relative size, and relative rotation.
Vector widgets in a (zoomable, rotatable, etc..) 2D or 3D vector space transcend certain issues that result from pixel-based layouts, such as dependence on resolution and device features.