Google have created a new HTTP-based protocol, “SPDY” (pronounced “Speedy”), to tackle the problem of client-server latency in HTTP. “We want to continue building on the web’s tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support.”
As a web developer, I can tell you that HTTP-requests are painfully slow, and any decent front-end developer optimises content to use as few requests as possible and to combine as many resources as possible into each request. Why is it so slow? Firstly, the request headers (sent to ask the server for a resource) and the response headers (sent back to state the file type, size and caching information) are uncompressed. This is the first thing SPDY rectifies:
Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 Mbps, request header compression in particular, led to significant page load time improvements for certain sites (i.e. those that issued large number of resource requests). We found a reduction of 45 – 1142 ms in page load time simply due to header compression.
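To get a feel for why headers compress so well, here is a minimal sketch in Python with a made-up but representative header block; SPDY itself goes further than naive zlib by keeping one compression context for the whole session and seeding it with a dictionary of common header names, so repeated requests cost almost nothing:

import zlib

# A made-up but representative uncompressed request header block.
request_headers = (
    "GET /images/logo.gif HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) "
    "AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.0 Safari/532.5\r\n"
    "Accept: image/png,image/*;q=0.8,*/*;q=0.5\r\n"
    "Accept-Language: en-us,en;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=abc123; prefs=compact; tracking=xyz789\r\n"
    "\r\n"
)

compressed = zlib.compress(request_headers.encode("ascii"), 9)
print(len(request_headers), "bytes raw ->", len(compressed), "bytes compressed")

Because a page’s dozens of requests carry nearly identical headers, a shared compression context is what pushes the savings towards the ~85–88% Google reports.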
Google describe the basic principle of how SPDY works in one line:
SPDY adds a session layer atop of SSL that allows for multiple concurrent, interleaved streams over a single TCP connection.
The use of SSL gives them safe passage through proxies and legacy network hardware, as well as increasing security for all users of the web—this is most welcome given what some backwards countries are planning to do.
SPDY multiplexes the resource requests, increasing overall throughput; fewer costly TCP connections need to be made. Whilst HTTP-Pipelining can allow more than one request per TCP connection, it’s limited to FIFO (and thus can be held up by a single slow request) and has proven difficult to deploy—no browser ships with support enabled. (The FasterFox add-on for Firefox does enable HTTP-Pipelining, but at the cost of compatibility with some servers.)
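As a toy illustration of the difference (this is not SPDY’s actual wire format, just the idea): once every frame carries a stream ID, responses can be interleaved over the single connection and a slow resource no longer holds up the ones behind it, which is exactly what pipelining’s FIFO ordering cannot avoid. A hypothetical sketch in Python:

from collections import defaultdict

# Toy frames: (stream_id, is_final, payload). Because each frame names its
# stream, frames from different responses can arrive interleaved.
frames = [
    (1, False, b"<html>..."),   # start of a big, slow page
    (3, True,  b"body{...}"),   # a small stylesheet finishes early
    (5, True,  b"GIF89a..."),   # so does an image
    (1, True,  b"</html>"),     # the page finishes last
]

streams = defaultdict(bytes)
for stream_id, is_final, payload in frames:
    streams[stream_id] += payload
    if is_final:
        print("stream", stream_id, "complete:", len(streams[stream_id]), "bytes")

With strict FIFO pipelining, streams 3 and 5 could not be delivered until stream 1 had finished; with interleaved frames they complete as soon as their own bytes arrive.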
Results have been good; Google’s goal is a 50% reduction in page load time. Under lab conditions, “The results show a speedup over HTTP of 27% – 60% in page load time over plain TCP (without SSL), and 39% – 55% over SSL.”
An interesting feature of SPDY is the ability for the server to push data to the client. At the moment the server cannot communicate with the browser unless the browser makes a request. Push is useful because it would allow web apps to be notified the instant something happens, such as mail arriving, rather than having to poll the server at an interval, which is very costly. AJAX apps like GMail and Wave currently use a faux-push hack: the server leaves an HTTP-Request hanging open (it never hangs up on the browser), so it can append new information to the end of that hanging request whenever something happens and the browser receives it immediately. SPDY will allow for much greater flexibility with server push, and bring web apps that bit closer to the desktop.
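From the client side, that hanging-request trick looks roughly like the sketch below (the URL and the JSON shape are invented for illustration; this is not GMail’s or Wave’s actual API):

import json
import urllib.request

# Hypothetical long-polling endpoint: the server holds the request open until
# it has something to report, then we immediately ask again.
POLL_URL = "https://mail.example.invalid/notifications?wait=60"

def long_poll():
    while True:
        with urllib.request.urlopen(POLL_URL, timeout=90) as response:
            event = json.loads(response.read())
            print("new event:", event)
        # Loop straight back around so a request is (almost) always pending.

Every cycle costs a fresh round trip and a connection held idle per client; genuine server push over a multiplexed SPDY session would let the server simply open a new stream whenever something happens.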
Google are quick to stress the experimental nature of SPDY: all existing work has been done under lab conditions, and they are uncertain how it will perform “in the real world”. There are also a number of questions still to be answered about packet loss and deployment (compatibility with existing network equipment is key to adoption, especially when you have such awful network operators as AOL in the game).
All in all, Google are not looking to outright replace good ol’ HTTP, rather to augment it with new capabilities that complement its purpose of serving you content. I’m glad that Google are willing to question even such a well-established cornerstone of the Internet as HTTP—if we don’t ask questions then we will never discover better ways of doing something. Google are really pushing every field of Internet engineering to get this big mess we call home moving in a positive direction—they can hardly expect Internet Explorer to be waving the banner of progress, after all.
Google is REALLY interested in taking over the entire internet!
With open standards and open source software that anybody can implement and use!
Can anyone else spend even a small amount of their annual income on the same openness? They are doing business, not charity.
Doing it the way they are doing it now, I can see no downside at all.
Doing it the Microsoft way is what I despise.
On that page, they are again asking for community work on this. They have reached a point where…
Well, that’s more or less how open-source works, isn’t it? A core team builds the basics of something, then looks for collaborators to help it grow?
For something as fundamental as the HTTP protocol, Google certainly can’t do everything themselves – they need people who make web servers, and web browsers, and HTTP and web services libraries to pick up what they’ve done, and incorporate it into their own projects…
So you’d rather that Google closed the code and released it as part of a proprietary product instead? Interesting approach to open source advocacy.
Or even worse, completing the product in-house without any feedback and then releasing it as open source.
Header compression is something that I can certainly see being useful. Web apps using AJAXy techniques, web services – they’re characterized by having relatively little content, meaning the uncompressed headers could often be half the traffic being transmitted.
I see one downside to this. With only one TCP connection, losing a packet will pause the transmission of ALL resources till the lost packet is retransmitted. Because of the way TCP/IP congestion avoidance works (increase speed till you start losing packets), this will not be a rare occurrence. There are two ways around this – use multiple TCP streams or, better, use UDP.
…but the resources to be retransmitted are also now smaller and more efficient, helping to negate it. So, if it becomes a problem, do a little reconfiguration, and change default recommendations on new pieces of network infrastructure. The networks will adapt, if it’s a problem.
If it ends up working out, it can be worked into browsers and web servers all over, and many of us can benefit. Those who don’t benefit can safely ignore it, if it’s implemented well. We all win. Yay.
The Real Problem we have is that old protocols have proven themselves extensible and robust. But, those protocols weren’t designed to do what we’re doing with them. So, if you can extend them again, wrap them in something, etc., you can gain 90% of the benefits of a superior protocol, but with easy drop-down for “legacy” systems, and easy routing through “legacy” systems. This is generally a win, when starting from proven-good tech, even if it adds layers of complexity.
Oh… yes… let’s use UDP so we can get half a webpage, a corrupted SSL session, and sites that don’t work or work wrongly.
Yes… UDP is the solution to everyone’s problems.
Oh wait, no… it’s not, because it is a mindless protocol that does not care if something important is lost or if it is wasting its time sending data to the other end.
You can implement detection and retransmission of lost packets on top of UDP. The problem is the in-order delivery of TCP: when you lose a packet, all packets received after it wait till the lost one is retransmitted. With UDP you can use the data in the new packets right away, no matter whether an older packet is missing and has to be retransmitted.
Imagine the situation where you are loading many images on a page simultaneously, a packet is lost, and because only one TCP connection is used – all the images stall till the lost packet is retransmitted.
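A toy model of that stall (illustrative only): five packets are sent, packet 1 is lost, and its retransmission arrives last. With TCP-style in-order delivery the application sees nothing past packet 0 until the gap is filled, even though packets 2–4 arrived long ago:

# Packet 1 is lost and only shows up at the end, after retransmission.
arrival_order = [0, 2, 3, 4, 1]

received = set()
delivered = []      # what an in-order (TCP-like) receiver hands to the app
expected = 0
for packet in arrival_order:
    received.add(packet)
    while expected in received:
        delivered.append(expected)
        expected += 1
    print("arrived:", packet, "| app has (in order):", delivered)

A reliability layer built on UDP could hand packets 2–4 to the application immediately and slot packet 1 in later, which is the trade-off being argued for here.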
Thanks for playing but you have no idea what you are talking about.
“On the lower-bandwidth DSL link, in which the upload link is only 375 Mbps”
WOW. Who cares about header compression when you’ve got 375 Mbps!
HTTP headers are huge. Payloads are huge. 375 Mbps is not enough when you have dozens of largish HTTP requests flying over the wire. Remember, that’s mega BITS, not mega BYTES, and that’s just a measure of bandwidth, not latency. Also keep in mind that as soon as a request gets larger than a single packet/frame, performance can quickly tank. If compression keeps the entire request under the MTU, you can get huge latency reductions.
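A rough back-of-the-envelope check of that packet-boundary point (the sizes here are illustrative guesses, not measurements):

# Once a request outgrows one ~1500-byte Ethernet frame it needs at least two
# packets, and the per-packet costs add up quickly at high latency.
MTU = 1500
examples = {"uncompressed request": 1800, "compressed request": 220}  # guesses
for label, size in examples.items():
    packets = -(-size // MTU)   # ceiling division
    print(label + ":", size, "bytes ->", packets, "packet(s)")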
Actually that’s a typo; even in bits it would be huge.
The real value is 375 Kbps, as you can see on this page:
http://dev.chromium.org/spdy/spdy-whitepaper
It was my understanding that SSL compressed the stream as a side effect of encryption, and that headers are within the encrypted stream – so if they are using SSL exclusively, why would you need to compress headers?
Please drop what you’re doing, sacrifice your personal time, and give free resources so that Google can make their next several billion dollars. (Suckers)
Google’s Summer of Code works the other way around.
How so? It just looks like yet more of Google pimping students for its own purposes…
Please enlighten us – what are those purposes?
For bonus points, explain the SoC students working on Haiku, FreeBSD, and other OSes and projects google has never used.
Mindshare: they gain a bunch of Google seminary students who will likely support and use Google’s platform in the future, and simultaneously deny their competitors (Microsoft, Yahoo, etc.) the chance to gain mindshare with the next generation of devs.
It’s quite interesting then to see that one of the funded projects is Mono.
Does Google gain mindshare? Sure, from those that get into the program, but that’s about it. Funding a project and gaining mindshare always go together.
People get paid to work on projects not related to Google, so Google doesn’t get to take advantage of any of that (no more than any other company does).
As far as I know no one is stopping Microsoft or Yahoo from applying as a mentoring organization and getting Google sponsored coders for free.
The picture you try to paint suggests this bunch of students has no CHOICE but to “support and use Google’s platform in the future”. They are students; I assume they can think for themselves and understand what marketing means.
I don’t see any problem with Google’s SoC and I am happy that a big company like Google tries to be open in at least some way and shares a lot of its research and code. Even if its only motive is to make a lot of money or to market itself: this is the case for every company. With Google we at least get something in return…
Anybody who spends any time helping Google become more successful — without being compensated — is a moron. If Google cares enough about this project, they will FUND it.
I guess I do not get why people clamor for closed-source software to be opened… like they have done with Java in the past…
Open it only when it is ready?
Google is not going to be more successful just because of a faster implementation of HTTP. Every internet user would benefit from a faster WWW though, and anybody contributing to such a goal, paid or unpaid, successful or unsuccessful, deserves credit and respect.
How does paying students to work on projects like bzflag or scummvm fit with Google’s “own purposes”? What would those purposes be?
http://www.osnews.com/thread?394382
Clearly Google employees are bored and in need of better games on their so-called “workstations”. What could be better than a nice game of “Monkey Island” or BZFlag?
Alternatively you can spend your time rooting for a commercial company that cares nothing about you, do their QA work for them and not get paid for that either while they’re making billions.
So, after ripping off all the features in Chrome from their competitors and offering none of their own, now they want to copy-paste Opera Unite into their new client-server HTTP-ish protocol. Will Google ever create anything innovative?
What do you mean ‘rip off’ and ‘offering none’? Their source is there for everyone to also ‘rip off’, and every WebKit change they’ve made has been committed back to WebKit; they didn’t fork.
As stupid as it sounds, probably the most noteworthy feature of Google Chrome (and the one that differentiates it the most) is that it puts its tabs above the address bar. Innovative? I wouldn’t say that – it just plain makes more sense that way. But it certainly wasn’t ripped off from anyone else.
As for Opera Unite??? What in the hell are you talking about? This SPDY stuff isn’t even remotely related to that, and I mean not even r e m o t e l y.
[I’ll probably be voted negative for this but who cares!]
Chrome = Speed Dial, Top-tabs, bookmark syncing, etc..
Opera DID have all these before Chrome. The first two were introduced in Opera before anywhere else, while I believe Opera was the first browser to integrate bookmark sync.
SPDY is more like Opera Turbo on the other hand, which compresses HTTP streams but also reduces quality of images.
Hell, even GMail wasn’t the first 1GB mail service — I remember a Mac fansite (Spymac? it’s a strange site now) that offered 1GB of free email before Google did.
Opera is often innovative but doesn’t put much energy into refining its ideas. Google, on the other hand, waxes and polishes them and makes them shiny for the user.
[rant]And seriously; it being open source doesn’t automatically mean that any business can and will adopt it.. It’s better, sure.. but that doesn’t stop their world domination ^_^[/rant]
Opera Turbo is a proxy. If you don’t mind your data being routed through Europe and heavily compressed beyond recognition.
SPDY is _not_ a feature in some web browser–it is a communications standard that anybody could implement in any browser. They have created a test version in Chrome, but Mozilla could just as well implement it too.
Both of the technologies do the same thing — compress web pages. One does it via a proxy, the other does it through a protocol implementation. And a proxy is much easier to integrate than a wholly new standard. Unless you have something racially against Europe, if it sends me my pages faster I have no issues. Images? Yes! It’s for viewing web pages faster on slow dial-ups. That’s the exact intent. So other than your personal bias against Opera, there’s not much else different.
To sum it up, both of them do *exactly* the same thing – compress web pages. One does it via a proxy, the other is a wholly new standard. Now read the part where I said, Opera innovates and Google polishes it.
Way to jump to conclusions. Depending on where you are, re-routing through Europe could make things slower.
if it sends me my pages faster
See that part there? Opera checks whether the pages you get really are faster with Turbo on. If not, it warns you and disables itself.
I do have concerns with increasing displays of racism in Europe ( and other places as well) , if that’s what you meant. But, I’d just prefer not to MITM myself out of paranoia.
World domination by open source software is no problem, because bad behavior by such an open source project immediately leads to forks. Just look at what happened to XFree86: they got forked by Xorg the second they started behaving funny (that license change).
I do not get, why people don’t seem to grasp the difference between world domination by a closed source entity vs. world domination by an open source entity.
It is as different as night and day.
I think email as it exists also carries some painful legacy decisions – although I don’t know which is harder to ditch: HTTP or SMTP?
HTTP: it would be nice to have a new protocol like SPDY, but stop and think about how many services and applications were designed with only HTTP in mind… it hurts. Browsers change every few months; enterprise-level applications don’t. If anything, SPDY could at least be used as an auxiliary or complementary browser data pipeline. But calls to replace HTTP mostly come from performance issues, not catastrophic design flaws (enter SMTP)…
SMTP: the fact that you’re expected to have an inbox of gargantuan capacity so every idiot in the world can send you pill offers to make your d!@k bigger is as stupid as taking pills to make your d!@k bigger. As it exists today, any trained beagle can spam millions of people and disappear with no recourse. Terabytes of “Viva Viagra!” exist due to the simple fact that the sender is not liable for the storage of the message – you are, you sucker. If the message is of any actual importance, it should stay on the sending server, available for the recipient to retrieve when they decide to. This provides many improvements over SMTP, such as:
1) confirmation of delivery
— you know if it was accessed and when – The occasional ‘send message receipt’ confirmation some current email clients provide you with is flaky and can easily be circumvented – this could not be.
2) authenticity
— you have to be able to find them to get the message; they can’t just disappear. Geographic info could also be used to identify real people (do you personally know anyone in Nigeria? Probably not…)
3) actual security
— you send them a key, they retrieve and decode the message from your server.
4) no attachment limits
— meaning no more maximum attachment size, because you’re retrieving the files directly from the sender’s ‘outbox’. “Please email me that 2.2GB file.” OK! Now you can! Once they’ve retrieved it, the sender can clear it from their outbox – OR send many people the same file from ONE copy instead of creating a duplicate for each recipient. This saves time, resources, and energy (aka $$$)!
5) the protocols and standards already exist
— SFTP and PGP would be just fine; a simple notification protocol (perhaps SMTP itself) would send you a validated header (sender, recipient, key, download location, etc.) which you could choose to view or not.
You’ll still get emails, but spammers will be easily identified because their location (and perhaps an authenticity stamp) will point to the server – and if not, you can’t get the message even if you wanted to. And again – if it’s so damned important, I know senders will be happy to hold the message till recipients pick it up…? Right?
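A very rough sketch of the flow being proposed, with invented field names and URL (in the proposal itself the heavy lifting would be SFTP for retrieval and PGP for the keys):

# The sender pushes only a small notification; the message body stays on the
# sender's own server until (and unless) the recipient decides to fetch it.
notification = {
    "from": "alice@example.com",
    "to": "bob@example.org",
    "subject": "that 2.2GB file you asked for",
    "fetch_url": "sftp://mail.example.com/outbox/msg-42",  # hypothetical location
    "key_id": "0xDEADBEEF",                                # key used to decrypt
}

def handle_notification(note, recipient_wants_it):
    if not recipient_wants_it:
        return None  # spam never consumes the recipient's storage
    # Retrieval (SFTP) and decryption (PGP) would happen here; the sender's
    # server also learns exactly when the message was collected.
    return "fetch " + note["fetch_url"] + " using key " + note["key_id"]

print(handle_notification(notification, recipient_wants_it=True))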
But we’re talking about HTTP here, which I can say isn’t quite as broken. Although they should keep working on SPDY, because give it a few years and the world will find a way to break it…
Google Wave takes care of that!!!
Seriously though… Google Wave, compared to GMail, could replace GMail if the Wave protocol were available for everyone to join in on. It actually COULD replace e-mail and do a sweet job of it.
Graylisting.
Internet mail 2000:
http://en.wikipedia.org/wiki/Internet_Mail_2000
Good link, but it kind of makes me think this architecture will never get adopted. Nine years after its namesake and I sure as heck never heard of it (although it does exactly what I was looking for). But there lies the problem: how do you enable its adoption on a widespread basis, without breaking compatibility, and without locking into a vendor’s service? Google Wave, innocent as it is, is still a service delivered by a company. I’m looking for an architectural change (like I.M.2000) that could be adopted transparently. Perhaps we’ll have to wait till email’s completely unusable for it to really change…?
1) confirmation of delivery
— you know if it was accessed and when – The occasional ‘send message receipt’ confirmation some current email clients provide you with is flaky and can easily be circumvented – this could not be.
And that’s a wanted feature?
You must be one of those marketing guys!
Who is ‘Goolge’ and why haven’t we heard of them before now?
–bornagainpenguin
Google’s evil twin. Their motto is “Do No Good”. They’re a closed and proprietary company constantly seeking to usurp the web with their own proprietary technologies and patents—a bit like Microsoft, you could say!
This has been developed by all major OS vendors, Apache, W3C and other projects.
Google spits on it, calls it a new project with a lame name and suddenly it’s gold?
Get real.
Wake me when Apache 3.0 becomes reality and the overhead they will rip out for that project becomes consumable.
That alone will drop a major amount of delay from interactions in the client/server model.
Hey – did you Photoshop your teeth to be so gleaming white?
In a way I applaud the idea of addressing latency. Handshaking – the process of requesting a file – is one of the biggest bottlenecks remaining on the internet, and it can make even the fastest connections seem slow.
To slightly restate and correct what Kroc said: every time you request a file, it takes the equivalent of two (or more!) pings to/from the server before you even start receiving data. In the real world that’s 200–400 ms if you have what’s considered a low-latency connection, and if you are making a lot of hops between point A and B or, worse, have a connection like dial-up or satellite, or are just connecting to a server overtaxed with requests – that could be up to one SECOND per file, regardless of how fast the throughput of your connection is.
Most browsers try to alleviate this by opening multiple concurrent connections to each server – the usual default is eight. Since the file sizes are different there is also some overlap across those eight connections, but if the server is overtaxed many of those could be rejected and the browser has to wait. As a rule of thumb, the best way to estimate the overhead is to subtract eight, reduce to 75%, and multiply by 200 ms as the low and one second as the high.
Take the home page of OSNews for example – 5 documents, 26 images, 2 objects, 17 scripts (what the?!? Lemme guess, jquery ****otry?) and one stylesheet… That’s 51 files, so (51-8)×0.75 = 32.25, which we’ll round down to 32. 32 × 200 ms = 6.4 seconds of overhead on first load on a good day, or 32 seconds on a bad day (subsequent pages will be faster due to caching).
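That rule of thumb, written out as a tiny calculation (same assumptions as above: eight parallel connections, 75% effective overlap, 200 ms best case and one second worst case per request):

def handshake_overhead(files, parallel=8, overlap=0.75, best_s=0.2, worst_s=1.0):
    # Estimated first-load overhead from request latency alone, in seconds.
    effective_requests = (files - parallel) * overlap
    return effective_requests * best_s, effective_requests * worst_s

low, high = handshake_overhead(51)  # the 51 files counted on the home page
print("about %.1f s to %.1f s of latency overhead" % (low, high))

Which lands within rounding of the 6.4-second and 32-second figures above.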
So these types of optimizations are a good idea… BUT
More of the blame goes in the lap of web developers, many of whom frankly are blissfully unaware of this situation, don’t give a **** about it, or are just sleazing out websites any old way. Even more blame goes on the recent spate of ‘jquery can solve anything’ asshattery and the embracing of other scripting and CSS frameworks that do NOT make pages simpler, leaner, or easier to maintain even when they claim to. Jquery, Mootools, YUI, Grid960 – complete rubbish that bloats out pages, makes them HARDER to maintain than if you just took the time to learn to do them PROPERLY, and often defeats the point of even using scripting or CSS in the first place. CSS frameworks are the worst offenders on that, encouraging the use of presentational classes and non-semantic tags – at which point you are using CSS why?
I’m going to use OSNews as an example – no offense, but fair is fair and the majority of websites have these types of issues.
First we have the 26 images – for WHAT? Well, a lot of them are just the little .gif icons. Since they are not actually content images and they break the CSS-off styling badly, I’d move them into the CSS and use what’s called a sliding-background or sprite system, reducing about fifteen of those images to a single file (in fact it would reduce some 40 or so images to a single file). This file would probably be smaller than the current files’ combined size, since things like the palette would be shared and you may see better encoding runs. Researching some of the other images, about 22 of those 26 images should probably be only one or two images total. Let’s say two, so that’s 20 handshakes removed, aka three to fifteen seconds shaved off the first load.
Of the 12 scripts, about half of them are the advertising (wow, there’s advertising here? Sorry, Opera user, I don’t see it!) so there’s not much optimization to be done there, EXCEPT that it’s five or six separate adverts. If people aren’t clicking on one, they aren’t gonna click on SIX.
But the rest of the scripts? First, take my advice and swing a giant axe at that jquery nonsense. If you are blowing 19k compressed (54k uncompressed) on a scripting library before you even do anything USEFUL with it, you are probably ****ing up. Google Analytics? What, you don’t have webalizer installed? 90% of the same information can be gleaned from your server logs, and the rest isn’t so important that you should be slowing the page load to a crawl with an extra off-server request and 23k of scripting! There’s a ****load of ‘scripting for nothing’ in there. Hell, apart from the adverts, the only thing I see on the entire site that warrants the use of JavaScript is the characters-left counter on the post page! (Lemme guess, bought into that ajax-for-reducing-bandwidth asshattery?) – Be wary of ‘gee ain’t it neat’ bullshit.
… and on top of all that you come to the file sizes. 209k compressed / 347k uncompressed is probably TWICE as large as the home page needs to be, especially when you’ve got 23k of CSS. 61k of markup (served as 15k compressed) for only 13k of content with no content images (they’re all presentational), most of that content being flat text, is a sure sign that the markup is fat, bloated, poorly written rubbish – likely with more of 1997 to it than 2009. No offense, I still love the site, even with its poorly thought-out fixed metric fonts and fixed-width layout – which I override with an Opera user.js.
You peek under the hood and it becomes fairly obvious where the markup bloat is: an ID on body (since a document can only have one body, what the **** are you using an ID for?), unnecessary spans inside the legend, an unnecessary name on the h1 (you need to point to the top? you’ve got #header RIGHT before it!), an OL nested inside a UL for no good reason (for a dropdown menu I’ve never seen – lemme guess, scripted and doesn’t work in Opera?), an unnecessary wrapping div around the menu and the side section (which honestly I don’t think should be a separate UL), those stupid bloated AJAX tabs with no scripting-off degradation, and the sidebar lists doped to the gills with unnecessary spans and classes. Just as George Carlin said “Not every ejaculation deserves a name”, not every element needs a class.
Using MODERN coding techniques and axing a bunch of code that isn’t actually doing anything, it should be possible to reduce the total filesizes to about half what it is now, and eliminate almost 75% of the file requests in the process… Quadrupling the load speed of the site (and similarly easing the burden on the server!)
So really, do we need a new technology, or do we need better education on how to write a website and less “gee ain’t it neat” bullshit? (Like scripting for nothing or using AJAX to “speed things up by doing the exact opposite”)
I think what pisses me off the most is that I’ve made websites where I want, for example, geometric shapes, but I can’t do it without a weird combination of CSS and gif files. Why can’t the W3C add even some of the most basic features that would let you get rid of large amounts of this crap? Heck, if they had a geometric tag that allowed me to create a box with curved corners, I wouldn’t need the Frankenstein code I use today.
What would be so hard to create:
<shape type="quad" fill-color="#000000" corners="curved" />
Or something like that. There are many things people add to CSS that shouldn’t need to be there if the W3C got their act together – yet the W3C members have done nothing to improve the current situation in the last 5 years except drag their feet on every single advancement put forward, because some jerk-off at a mobile phone company can’t be bothered upping the specifications in their products to handle the new features. Believe me, I’ve seen the conversations, and it is amazing how features are being held up because of a few nosy wankers holding sway in the meetings.
“SVG 1.0 became a W3C Recommendation on September 4, 2001” — Wikipedia.
While it’s hardly simple, SVG was actually intended for exactly this kind of thing. The problem is that only Webkit allows you to use SVG anywhere you’d use an image.
Gecko and Opera allow you to use SVG for the contents of an element only. Internet Explorer doesn’t support SVG at all, but allows VML (an ancestor of SVG) to be used in the same way you can use SVG in Gecko and Opera.
So the functionality is there (in the standards) and has been there since 2001. We just aren’t able to use it unless we only want to support one browser. Cool if you’re writing an iPhone application, but frustrating otherwise.
As for your specific example, you can do that with CSS, using border-radius. Something like this:
-moz-border-radius: 10px;
-webkit-border-radius: 10px;
border-radius: 10px;
Of course, as with everything added to CSS or HTML since 1999, it doesn’t work in Internet Explorer.
Blaming the W3C for everything hardly seems fair, considering that these specs were published almost a decade ago, and remain unimplemented. Besides, there are plenty of other things to blame the W3C for. Not having actually produced any new specs in almost a decade, for example.
I agree absolutely.
Since Adam already spilled the beans in one of the Conversations, I may as well come out and state what is probably already obvious: there is a new site in the works, and I’m coding the front end.
_All_ of your concerns will be addressed.
The OSnews front-end code is abysmally bad: slow, bloated, and the CSS is a deathtrap to maintain. (The back end – all the database stuff – is very good and easily up to the task.)
Whilst we may not see eye to eye on HTML5/CSS3, I too am opposed to wasted resources, unnecessary JavaScript and plain crap coding. My own site adheres to those ideals. Let me state clearly that OSn5 will be _better_ than camendesign.com. I may even be able to impress you (though I doubt that).
That there just means you don’t use Google Analytics (or don’t know how to use it). It is a very powerful piece of software that can’t be replaced by Analog, Webalizer or digging through logfiles.
No, it’s just that the extra handful of minor bits of information it presents is only of use to people obsessing over tracking instead of concentrating on building content of value – usually making such information only of REAL use to the asshats building websites whose sole purpose is click-through advertising bullshit, or who are participating in glorified marketing scams like affiliate programs… such things having all the business legitimacy of Vector knives or Amway.
I agree, this isn’t anything new. I remember reading about how this could be done way back in 1999 (the original article author is probably working for Google now).
This should be the W3C’s job, to update web standards and promote the new updated versions. Instead, the W3C works on useless crap like “XML Events”, “Timed Text”, XHTML 2.0 and “Semantic Web” (which is due to reach alpha state some time after the release of Duke Nukem Forever).
Let’s face it, HTTP 1.1 is abandonware, and I think we have to applaud Google for taking the initiative, actually implementing it and trying to put some weight behind the push. By the same token, let’s see Google push more for IPv6 and the ideas suggested by two of the people in the comments on this article 🙂
I thought the comment about “Internet Explorer” not waving any flags was uncalled for… I think you should direct that hatred towards the slacking company behind the shitty product! Oh, and it’s a bit off-topic as well… it’s not really Microsoft’s fault HTTP is crap?
It’s nobody’s specific fault that HTTP is crap, but then what matters is who is going to do anything about it.
Microsoft have had total dominance of the web for almost a decade. At no point during that time did they attempt to improve the status quo. At no point did they say that “You know, HTTP is slow and could do with improving”. They just coasted along with careless disdain.
Agreed, IE had a lot of time on its hands to improve the quality of the user’s experience, but they didn’t do much about it.
On the other hand, IE *did* indirectly invent AJAX
Not a big deal! I have realized the limitations of the web and I stay within those limits.
SPDY.. is called beating around the bush!!