In an effort to further commoditize computing, Sun is now selling computing and storage utilities, modeled after other “grids” like electrical, water, and oil distribution. Instead of paying by the kilowatt-hour or gallon/litre, customers pay by the CPU-hour or the gigabyte-month.
In a way, this is a natural extension of other hosting services, such as web hosting, except that general computational workloads are supported on many processors.
Don’t large DB vendors sometimes charge per transaction?
Is this really much different?
On the plus side, I wouldn’t mind hosting a few static files that way, if it got cheap enough.
Unfortunately I don’t think the apps are there to do this stuff. I’d really like it if 3ds Max could utilize CPU-hours when it does my renderings. Of course, we could make new apps… now, to think of interesting things I can do with vast amounts of processing power… hmm… it’s just like the ’50s, when atomic power was becoming available: create new and interesting ways for people to waste, err, I mean, utilize power.
It seems that charging by time makes less sense than charging by some kind of computational unit (like a teraflop). The more popular it becomes, the less computational power you get for your dollar, until they upgrade the system.
“I’m sure it will suck”
Great attitude.
“Unfortunately I don’t think the apps are there to do this stuff.”
Sun is basically selling time in a Container on their big server farm. It’d be the regular Solaris environment with metered CPU time, so I would assume that any traditional parallel task that needs many CPUs for, say, a week, is fair game. Instead of paying a few hundred grand for a private cluster, you can rent time as needed (a week on 50 CPUs would be $8,400 according to their pricing).
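A quick sanity check on that figure, as a sketch. The $1 per CPU-hour rate is an assumption inferred from the arithmetic (50 CPUs × 168 hours = $8,400), not a quote from Sun’s price list:

```python
# Back-of-the-envelope cost of renting grid time, assuming a flat
# $1 per CPU-hour rate (inferred from the $8,400 figure in the post;
# check Sun's actual pricing before relying on this).
RATE_PER_CPU_HOUR = 1.00  # dollars, assumed

def rental_cost(cpus, hours, rate=RATE_PER_CPU_HOUR):
    """Total cost in dollars for `cpus` CPUs over `hours` wall-clock hours."""
    return cpus * hours * rate

# A week on 50 CPUs:
print(rental_cost(50, 7 * 24))  # 8400.0
```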
As far as apps go, what shortage of HPC apps is there for UNIX? Does everyone do their CFD on big Windows XP clusters, now?
The more popular it becomes, the less computational power you get for your dollar, until they upgrade the system.
It appears Solaris Containers can guarantee allocations of CPUs, and Sun could surely tell you if there is “no room at the inn” when their CPUs are all reserved (they say they have 13000 of them…). Also, Sun’s bread and butter is ~1GHz UltraSPARCs or ~2GHz Opterons right now, so you can estimate what FLOPs/$ you get.
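A rough version of that FLOPs/$ estimate. Both numbers here are assumptions: a ~2 GHz Opteron peaks around 4 GFLOPS (two double-precision FLOPs per cycle via SSE2), and the ~$1/CPU-hour rate is inferred from the figures elsewhere in the thread; sustained throughput will be well below peak:

```python
# Rough FLOPs-per-dollar estimate. Assumptions: ~4 GFLOPS peak per
# ~2 GHz Opteron CPU, and a flat $1 per CPU-hour rate. Real sustained
# performance depends heavily on the workload.
peak_gflops = 4.0                          # assumed per-CPU peak
rate = 1.00                                # dollars per CPU-hour, assumed

flops_per_hour = peak_gflops * 1e9 * 3600  # FLOPs bought per CPU-hour
flops_per_dollar = flops_per_hour / rate

print(f"{flops_per_dollar:.2e}")           # 1.44e+13
```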
You know what a queuing system is?
Sun will be using GridEngine to do the scheduling work.
GridEngine is open source, and is available for almost all operating systems.
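Stripped to its essence, a batch-queuing system just matches queued jobs to free CPU slots. A toy sketch of that idea — this is an illustration only, not GridEngine’s actual algorithm (which handles priorities, parallel environments, fair share, and much more):

```python
from collections import deque

# Toy sketch of what a batch-queuing system does at its core: jobs wait
# in a FIFO queue until enough CPU slots are free. Not GridEngine's real
# scheduler; just the basic idea.
def schedule(jobs, total_cpus):
    """jobs: list of (name, cpus_needed) tuples. Returns the order in
    which jobs start; a job that doesn't fit waits for an earlier job
    to (notionally) finish and free its CPUs."""
    queue = deque(jobs)
    running, order, free = [], [], total_cpus
    while queue:
        name, need = queue[0]
        if need <= free:
            queue.popleft()
            free -= need
            running.append((name, need))
            order.append(name)
        else:
            # No room: "finish" the oldest running job, freeing its CPUs.
            done_name, done_cpus = running.pop(0)
            free += done_cpus
    return order

print(schedule([("a", 8), ("b", 8), ("c", 4)], 12))  # ['a', 'b', 'c']
```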
The question is, is the volume there right now to make significant money? This service is complicated for most, and they will need to do some careful advertising to win customers. I’m still very much unsure of Sun’s marketing department’s ability to sell this.
Are we going to see a reinvention of the pre-personal-computing era?
Whatever they charge for (be it straight wall-clock time, cycles, or teraflops), it’s interesting to see these ideas re-emerging.
The real questions here (at least for me) are:
1. Security
The problem of hostile code on a utility computer was never a problem in the ’60s. However, rentable CPU cycles no doubt present an interesting target for the cracker community. If the user cannot have control over the code running on the machine, then the service is of little use. I wouldn’t consider any mainstream OS to be capable of providing the kind of least-authority access required to do this safely.
2. Emergent Effects
Much has been written about agoric resource allocation schemes (where everything is paid for). Imagine a natural extension of this idea to where everything you use on a grid/network/single PC must be paid for. History has shown the market to be a very useful tool for automatic self-correction. The dream of this stuff being applied to resource allocation (“let the system figure itself out, because users will be motivated to get the resources they need at the cheapest price”) on computers has been around for a while. Again, it’s interesting to see it in the real world in this millennium.
An interesting idea, whose real implications ought to reach much farther than a Sun press release and marketing crud.
“1. Security
The problem of hostile code on a utility computer was never a problem in the ’60s. However, rentable CPU cycles no doubt present an interesting target for the cracker community.”
From what people have been posting elsewhere, Solaris Containers are supposed to be bullet-proof from a security point of view. Cracking the utility would probably use social engineering as the path of least resistance, or other classic attacks (cracking the ISP instead, for example; hence SSH and SSL). This is generally no different from private servers, or your data at a hospital or bank. Also, I’m sure Sun would be contractually bound with respect to privacy (they wouldn’t be courting people with large proprietary datasets otherwise).
“2. Emergent Effects”
It is certainly the same classic rental scenario (why buy a backhoe to dig a foundation for a storage shed?), but the support contracts alone on a private HPC cluster could cost more than the rental fees. Agreed that Sun’s marketing people need to sell this right, though…
> 1) Security
Solaris Zones should be able to solve it.
Sorry about so many posts in this thread, but I noticed something else buried in the link above: Desktop Utility.
Jonathan talked a bit earlier about it right here in his blog: http://blogs.sun.com/roller/page/jonathan/20040925#1_hr_computing_a… dated Saturday September 25, 2004.
However, it’s not really new. Here’s an earlier initiative which went live about a year ago: http://www.fz-juelich.de/nic/Rechenzeit/Rechenzeit-d.html
End of the financial year, and one requires lots of power to crunch the employee pay numbers for the tax man; instead of purchasing servers that would otherwise sit idle the rest of the year, a company keeps its normal servers, and when it needs more power, such as at the end of the financial year, it simply approaches Sun or IBM and asks how much it would cost to borrow some number-crunching time on their system.
Yes, this is a rebirth of an old idea, but the idea was good and should never have been killed off as rapidly as it was. The only *real* winner was Microsoft, who preached the trendy “PC on every desktop”, resulting in millions of PCs that spend 80% of their time idle – which IMHO is a waste of capital spending – when a centralised network of thin clients, coupled with utility computing, would ensure the system was always being utilised; and if extra capacity is required, it can easily be obtained, without the problem of that capacity lying dormant when not in use.
I agree with “tm”; this reminds me (no, I’m not that old, just talking about the history of PCs) of the time when students like Gates or Wozniak had to use mainframes during the night because usage was cheaper then…
BTW, when will we earn some money with apps like SETI@home? I have some CPU cycles that want to be used.
Sun seems to control not only the CPUs (the “electricity”) but also the software (the “electrical appliances”). So the customer cannot choose more efficient software (“low-power appliances”) to decrease costs.
If Sun charges per unit of time, then they have no incentive to offer efficient software. The more time a customer needs to buy to complete a certain task, the better for Sun.
I would prefer a more task oriented pricing.
“If Sun charges per unit of time, then they have no incentive to offer efficient software.”
They say at the website they are providing Solaris 10 and JES on top of an N1 Grid. This is the same software that anyone can buy directly. Since they are also marketing to engineers and scientists (they mention protein folding and crash simulations as examples), this means they probably have to provide their compiler suite (C/C++/Fortran), too.
Probably what people could do is prototype on an SMP workstation running Solaris 10, and, when their code is ready for the big run, they would ship it off to run on the grid. For custom scientific code, it is as efficient as the programmers can make it.
I wonder if their “storage utility” could be used for custom Containers, where customers could install their own software tools beyond what Sun provides. In that case, all Sun provides is the OS and the CPUs to be used however people want.
In the old days at our university, people paid for CPU time.
That’s when you had one mainframe and terminals connected to it.
If you had a CS class you got some free hours to do your projects, but if you wanted more time you had to pay.
Probably what people could do is prototype on an SMP workstation running Solaris 10, and, when their code is ready for the big run, they would ship it off to run on the grid. For custom scientific code, it is as efficient as the programmers can make it.
That’s the key I think. You can build and develop the software on a “cheap” 4-way box to make sure it scales, and then Sun offers up one of their monster boxes for that actual run.
Overall it’s a rather niche market, but this is ye olde “time is money” equation. If I had a process that takes one month of clock time on a 4-way box, and I can “rent” 128 CPUs from Sun and get my result in a day? For $3,000? By simply uploading it to their site and filling out a PO?
There’s a lot of benefit there for the user who needs occasional bulk CPUs. “Supercomputing for the rest of us”.
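The time-is-money tradeoff above works out roughly as follows, as a sketch. It assumes perfect scaling (the work parallelises with no overhead) and the ~$1/CPU-hour rate implied by the figures elsewhere in the thread:

```python
# Rough model of the tradeoff: a fixed amount of work, measured in
# CPU-hours, costs the same at a flat rate no matter how many CPUs you
# rent, but finishes sooner on more CPUs. Assumes perfect scaling and
# a ~$1/CPU-hour rate (both assumptions, not Sun's published terms).
RATE = 1.00                   # dollars per CPU-hour, assumed

work = 4 * 30 * 24            # one month on a 4-way box = 2880 CPU-hours

cost = work * RATE            # 2880.0 dollars, close to the $3,000 quoted
wall_hours = work / 128       # 22.5 hours of wall-clock time on 128 CPUs

print(cost, wall_hours)       # 2880.0 22.5
```

The point the model makes explicit: renting more CPUs doesn’t change the bill, only the calendar.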
in the old days at our university people paid for cpu time.
thats when you had one mainframe and terminals connected to it.
In the old days, there was a reason why folks didn’t have computers in their homes. In the old days there was a difference between a VAX Cluster and an Apple ][.
Back In The Day, many large companies had internal Computing Divisions that would sell computer time to other projects. These specialized divisions would maintain the machinery, etc. and offer the machines as a utility to the engineering projects and what not, who would then be billed for their use (and that would then be passed on to the customer).
The idea of “charging” for computing time is the same as charging for bandwidth. It amortizes the cost of the infrastructure and maintenance across the users who use it.
Universities don’t charge anymore for commodity computing because it is now so cheap at the low end (or they simply require you to bring your own computing).
But the big computers are always going to cost more, need more cooling, power, etc.
Sun has 13000 CPUs. How many tons of air conditioning and how much power is that consuming? And note that it pretty much consumes that whether it’s running useful work or not.
What Sun is doing is taking this old idea and making it available to the public, versus just large companies and university labs. Now, say you’re a home 3D animated film maker and you want to make your final rendering. You don’t have to be Pixar; you can make a low-res mock-up, hunt down your investors, and say “I just need $30,000 in computer time to make the final print”. ($30,000 is about 3.4 CPU-years at Sun’s rate.) “Hi, Sun? Can I book your entire grid for 3 hours? I have a rush job for a premiere Friday…”
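Converting that budget into CPU time, again assuming the flat $1/CPU-hour rate inferred from the thread’s figures:

```python
# Converting a dollar budget into CPU time at an assumed flat
# $1 per CPU-hour rate (inferred from the thread, not Sun's price list).
RATE = 1.00                    # dollars per CPU-hour, assumed
HOURS_PER_YEAR = 24 * 365      # 8760

budget = 30_000
cpu_hours = budget / RATE      # 30000.0 CPU-hours
cpu_years = cpu_hours / HOURS_PER_YEAR
print(round(cpu_years, 2))     # 3.42

# Booking all 13,000 CPUs for a 3-hour rush job:
print(13_000 * 3 * RATE)       # 39000.0 dollars
```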