Apple today previewed Xgrid, a computational clustering technology from Apple’s Advanced Computation Group (ACG). Xgrid helps scientists and others working in compute intensive environments to fully utilize all IT resources, including desktops and servers, by creating a grid enabled “virtual” IT environment that takes advantage of unused computing capacity to run batch and workload processing.
I can’t help but think iSupercomputer…
Seriously, though, this is a very good example of Apple’s innovation. What other company thinks to make a friendly, lickable front-end to *clustering* technology???
Wow – Apple have a new daemon which sits in the background waiting for TCP/IP requests on a certain port. When told to wake up by an XGrid server, it accepts an application to execute and reports the results back to the central server. Yawn. I set up something similar here at work years ago to run mathematical simulations. Yes, it can be done on any platform ever made.
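To be fair, the agent half of such a system really is small. Here's a minimal sketch in Python; the port, message format, and complete lack of authentication are all invented for illustration, and this is not Xgrid's actual protocol:

    import socket
    import subprocess

    PORT = 4111  # arbitrary port for this sketch

    def serve():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                # Pretend the controller sends one shell command, newline-terminated.
                cmd = conn.makefile().readline().strip()
                done = subprocess.run(cmd, shell=True, capture_output=True, text=True)
                # Report the results back to whoever sent the job.
                conn.sendall(done.stdout.encode())

    if __name__ == "__main__":
        serve()

Which, incidentally, is exactly why the security worry below is legitimate: a daemon that executes whatever lands on its port needs real authentication before it goes anywhere near a production network.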
Massive potential for security abuse – hope Apple have analysed vulnerabilities properly.
Um… the point of xgrid is that you don’t have to configure anything… oh wait, that’s what you did, right?
Hahaha so true.
Also I figure there is security to stop that. But I am too lazy to read up on it right now.
Your ignorance truly disgusts me. I’m in the process of deploying the second cluster our group uses for the Regional Atmospheric Modelling System (http://atmet.com) and to me, especially in the light of our recent cluster deployment, XGrid looks like the most wonderful cluster management tool I’ve ever seen. Please read:
http://www.apple.com/acg/xgrid/
XGrid makes for the easiest cluster deployment I can possibly imagine. Being Rendezvous-enabled, it can automatically detect which resources are available or in use by other users, as opposed to existing GUI management software, which has to be configured manually.
Honestly, what knowledge of clusters do you have? Do you know the syntax for invoking an MPI job via mpirun? Can you even name a specific cluster management tool with similar features to XGrid?
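(For the record, a classic MPICH launch looks roughly like the following, wrapped in Python for illustration; the host file name and the executable are made up.)

    import subprocess

    # "machines" is a hypothetical host file listing one node per line;
    # "./rams_sim" stands in for whatever MPI binary you actually run.
    subprocess.run(
        ["mpirun", "-np", "16", "-machinefile", "machines", "./rams_sim"],
        check=True,
    )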
I hate trolls…
“Massive potential for security abuse – hope Apple have analysed vulnerabilities properly.”
I imagine most people would just use dedicated computers for their Xgrid instead of sucking cycles off the secretary's iMac. So then there wouldn't be a security issue except from people sitting at it.
Well, if you have at least some brain tissue, you'd not allow any connections from outside the network (here comes the manual configuration), and if you are an idiot who connects such a *supercomputer* to an outside network, be it another LAN or the Internet, then you deserve to be hacked and shot. Now, I have very little experience with clustering, but this sounds good. However, as a good Unix purist I must bash the whole no-configuration ease-of-use thing. Bah, stupid Apple (that was mandatory).
I applaud Apple for their efforts in this space. I think for users who have a homogeneous back end of OS X machines, this grid will be enticing. However, for users with a heterogeneous back end, the problems are obvious. I'm not entirely clear as to what the actual components of Xgrid are, but the Rendezvous portion could be ported, since it's an open standard. I think what would be interesting is to see how well the Globus Toolkit performs against Xgrid. I don't know if that's an apples-and-oranges comparison. With an emerging and open grid standard already formed, I'm curious to know whether Apple considered building a front-end to this standard.
Any thoughts from people here?
Sirs:
I have been working with clustering Apples for 7+ years: Backgrounder for Infini-D, RenderMan Pro, DreamNet for Ray Dream Designer, and Extreme 3D.
Extreme 3D was the simplest to set up, and it was free.
Put an alias in the startup folder and reboot; it configured itself. (Actually, the first copy cost about $100.)
Lately I have been playing with distcc. That was even simpler. No configuring necessary. The clients boot up and start looking for a server… they wait until I need to compile a new kernel, and ZOOM: 100% network saturation until it's done. Even with node crashes, it's fast, robust, and makes use of all those dusty Pentium IIs lying around.
Now all I have to do is get it running on the SGI Indigo2, and that should cut the compile times again by about 50%.
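For reference, the distcc workflow described above amounts to roughly this, wrapped in Python for illustration; the helper host names are invented, and distccd must already be running on each of them:

    import os
    import subprocess

    # Point distcc at the helper machines, then run a parallel build with
    # distcc standing in as the compiler.
    env = dict(os.environ, DISTCC_HOSTS="localhost p2-node1 p2-node2 p2-node3")
    subprocess.run(["make", "-j8", "CC=distcc"], env=env, check=True)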
Total cost of equipment: nothing more than a home PC.
Total cost of ownership? All told, with the two 64-port switches ($200 each), about 4% of an Xcluster, and about 2x as fast. I can't wait to see these Xclusters stacked up in garbage cans.
The “supercomputer” at VT cannot load a dataset any faster than any other cluster on the Top 500. It's all smoke and mirrors, boys.
The point is not to buy new cluster hardware but to use existing machines.
You are full of it and don’t even know what you are talking about.
Turbolinux used to sell a product that did exactly this… across multiple platforms: Windows, Linux on several architectures, and Solaris.
I'm sorry, but Xgrid is a big yawn; it's nice that Apple is actively supporting the technology on OS X, but this technology has been available on Linux for quite some time.
Found it… the product is called EnFuzion, developed by Axceleon. It's been around for a while.
I know places that use it and really like its capability.
Xgrid is almost exactly EnFuzion for OS X… I wouldn't be surprised if Apple licensed some of EnFuzion's code base.
However, Apple's decision to support it or not is contingent upon whether they will sell more machines by supporting it. If they find that there is high demand to add a couple of G5s to a pre-existing cluster, they'll bite; otherwise, they'll push very hard to have new supercomputers be purely Mac-based.
If only their goal was to be the most uber-cool geek company, instead of earning money for their shareholders. ::sigh::
I just downloaded the preview, now for REAL my iBook 600 will own your n00bx0r windows box. All your Photoshop supercomputer are belong to me.
You really need to look at this from the scientific computing standpoint. Scientific computing is the primary consumer of HPC clusters, and thus Apple’s primary market with any cluster product.
"Total cost of ownership? All told, with the two 64-port switches ($200 each), about 4% of an Xcluster, and about 2x as fast. I can't wait to see these Xclusters stacked up in garbage cans."
Let me introduce you to the wonderful world of educational purchasing (the majority of scientific computing takes place at institutions of higher education). As an educational institution there are three ways we can go with any large purchase:
Tier 1 – No overhead, and no need to put the purchase out on bid. These are companies that everyone trusts, so we worry much less about under-the-table deals and corruption.
Bargain vendor – No overhead, but if you don’t want to go tier 1 you have to put the purchase out on bid. The order goes to the lowest bidder, which means that you will be dealing with the bottom of the barrel. The rationale is… better a headache for government employees dealing with a lousy company than allowing government agencies to dictate whom they will purchase from.
Parts – Purchasing a cluster in parts instead of as a single unit incurs 50% overhead, mitigating the cost advantages and coming with a major disadvantage… no service contracts, only manufacturer warranties.
We purchased our latest cluster for approximately $8,000; it consists of 8 nodes with dual 2GHz Athlon MPs and a master node with 1.4TB of RAID-5 storage. Even with the educational discount, this would only cover 3 of the “Cluster Node” G5 Xserves. However, purchasing this cluster has been a multi-month affair, as the order was constantly bungled. Almost six months after we began the purchasing process, we still do not have all the parts.
XGrid would be an excellent seller for us, as we're looking at moving to Apple workstations. We use primarily MPI applications and currently don't use any cluster management software. As more users want access to the cluster, the need for cluster management software has become painfully clear. We've looked at various free software products (e.g. http://www.opensce.org/) and haven't been happy with any of them. XGrid would provide easy interoperability with MPI applications, significantly ease management of the cluster's resources, and significantly ease the process of cluster deployment. While we would look at all the tier 1 vendors, namely Dell for Xeons or IBM for Opterons, Apple certainly has an attractive offering with the G5 Xserve.
The neat thing about MOSIX (http://www.openmosix.org) is that it is able to distribute normal apps, rather than apps that know they are being distributed. It makes your Linux cluster seem like a single massive SMP machine to each and every user.
I guess for Apple users, having the ability to do ‘distributed computing’ is an exciting concept… especially when they have not been exposed to any other OS.
"The neat thing about MOSIX (http://www.openmosix.org) is that it is able to distribute normal apps, rather than apps that know they are being distributed. It makes your Linux cluster seem like a single massive SMP machine to each and every user."
The main drawback of MOSIX is that its migration algorithm is not tuned in such a way that MPI processes migrate, even in the more recent versions that allow applications which use sockets to migrate, and even when all SVR4 features are disabled and all IPC is done with sockets only. I didn't play around with it for long (I believe I got a dnetc process to migrate)… to even use MOSIX I had to write a custom script to be invoked by MPICH to create worker processes on each machine… it was simply easier to configure the cluster for canonical remote execution via rsh (the whole thing is firewalled, of course).
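For the curious, such a spawn script is not much code. Here's a guess at what it might have looked like, assuming MPICH's ch_p4 device and its P4_RSHCOMMAND hook; the argument handling mimics rsh and is simplified:

    #!/usr/bin/env python
    # Hypothetical stand-in for rsh, pointed to by P4_RSHCOMMAND.
    # ch_p4 invokes it roughly as: <script> <host> [-l user] -n <command...>
    import os
    import sys

    args = sys.argv[1:]
    host = args.pop(0)
    if args[:1] == ["-l"]:   # drop an optional "-l user" pair
        args = args[2:]
    if args[:1] == ["-n"]:   # "-n" means "no stdin", as with rsh
        args = args[1:]

    # Launch the worker on the remote node. Swapping in ssh, or a MOSIX
    # placement wrapper, happens here and nowhere else.
    os.execvp("rsh", ["rsh", host] + args)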
The MOSIX filesystem is a very handy way to distribute files among the nodes of a cluster, but that has little use when your worker nodes are diskless and operating via root-on-NFS.
Yeah, I guess an MPI process is something that expects to know that it is distributed.
I've used mosix (before the openMosix days; are migratable sockets ready now? cool!) so that I could tap into the idle cycles in the typing pool for my big compiles. Of course make was splitting off processes to compile subparts, but all the same, it didn't really understand that it was being distributed.
Obviously word processors etc. aren't as likely to migrate as something compute-intensive, so there is such a thing as being ripe for distribution yet not ‘understanding’ that it is being distributed.
Is XGrid about migration of generic processes, or is it a deliberate clustering system for programs specifically designed to be clustered? I guess I am showing that I don't own any Macs, nor have I read the article 😉
To my uneducated eyes, it seems that clustering is either a way to use those PCs collecting dust, or something serious with some real money involved.
If you need to do serious calculations, those 30 PIIs in a lab would probably not make a very well-performing cluster.
If you want to do serious stuff, you need a cluster of powerful nodes, with no more than two CPUs each (more seems to be not so efficient, and you will need too much RAM), at a reasonable price.
Considering the good performance of the G5, and with this new XGrid product available, Apple is doing a nice job, I guess.
I find it pointless to compare 10 dual-G5 nodes to a lab full of old PCs.
There is probably the hobby cluster (wow, I've compiled the new kernel faster, or encoded a movie to DivX) and then some more serious stuff.
It is not so easy to scale up nicely with old, crappy hardware, maybe connected with a not-so-good network, and so on.
I am personally happy to see the clustering world improving day by day, and now Apple is in this world too. Why not?
Anyhow, I am not so sure about the incredible performance that Apple is claiming for the G5… it is for sure damn good… but so much better than the Opteron?
Everyone belongs to a religion… in computers too… so no flames, thanks.
"We purchased our latest cluster for approximately $8,000; it consists of 8 nodes with dual 2GHz Athlon MPs and a master node with 1.4TB of RAID-5 storage."
I want to know where you got all that for only $8,000. Heck, we just got a 1.4TB RAID-0 (I think – striping – I always get 1 and 0 mixed up) for $5,000. So you managed to get that and 8 dual 2.0GHz Athlons for only $3,000 more? I really want to get that.
Well, I'm guessing they are putting all the stuff together themselves. I know the controller for a RAID 5 setup isn't cheap, but HDs are, and 1.4TB isn't that big anymore. I don't know if there are redundant drives in a RAID 5 setup, but hitting good price points you can do 1.4TB with 10 HDs for a bit over a grand, plus a good 500 bucks for the controller, so they could probably do it for under 2 grand.
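The arithmetic checks out, by the way. RAID 5 spends one drive's worth of capacity on distributed parity, so with hypothetical 2004-ish prices:

    # RAID 5 usable capacity is (n - 1) * drive size; one drive's worth
    # goes to parity. Prices below are guesses, not quotes.
    n_drives, drive_gb, drive_cost = 10, 160, 110
    usable_tb = (n_drives - 1) * drive_gb / 1000.0
    total = n_drives * drive_cost + 500  # plus ~$500 for the controller
    print("%.2f TB usable for about $%d" % (usable_tb, total))  # 1.44 TB, ~$1600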
That leaves 6 grand for 8 dual Athlon boxes. That's not too hard at all if you're building it yourself. Even easier if they used Athlon XPs, which can run as MPs, though AMD doesn't certify them for this.
It's unlikely they bought off-the-shelf boxes for this. They probably bargain-hunted, hit all the price points, and maybe even got some discounts along the way. If you're a university, money is tight – or rather, you are sure not to waste it – plus you have those handy little grad students to put to work.
Ha ha… you're funny… you think a grid of computers is the same thing as a cluster… that's so cute.
This is not Apple trying to look cool. Xgrid is another in a line of products Apple has developed to reduce the overhead of computing. Sure, lots of you on this board can put together some form of shared-processing network. But guess what: you are all overhead. Being overhead, your boss would love to get rid of you. If Apple's product is successful, it will reduce the need for administrators and technicians supporting shared-processing computing systems.
Let's say it reduces the need by one FTE (full-time equivalent). The last time I got a quote for a high-level Unix support tech it was about $60.00 per hour. That is for a supplemental employee, but let's assume that it reflects the fully loaded cost of the FTE. At $60 × 40 hours × 52 weeks, that works out to $124,800.00 per year in avoided overhead.
Now, let’s assume that I am running a company that truly does computation intensive work, and the more computing power I have the better. How much more computing power can I buy from Apple for the $124,800 I just saved? I get a further win because the hardware I buy is capital that I can depreciate.
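As a back-of-the-envelope check (the per-node price here is an assumption, not Apple's actual quote):

    # One avoided FTE vs. extra cluster nodes, using the figures above.
    hourly = 60.00                   # quoted rate for a high-level Unix tech
    fte_per_year = hourly * 40 * 52  # $124,800
    node_price = 3000.0              # assumed cost of one cluster-node Xserve
    print("one FTE buys about %d extra nodes a year" % (fte_per_year // node_price))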
Now think about this in terms of a small to medium size engineering company or a post production company. More computing power could mean more output/month and therefore more revenue. Reduced overhead means they get to keep more of that revenue. That can mean the difference between profit and loss on some contracts.
Hey, stop with the insults and wake up a minute. What Apple have done is nothing revolutionary; virtually any competent engineer can whip up a system similar to XGrid in less than a week. Heck, back when I was still a junior at the company I now work for, I set up a series of distributed processing tests, and the entire front/backend was created in one afternoon. The system works like this: there is a small daemon running on every PC which listens for requests on a specific port. Requests are sent from a server, which instructs each PC to accept and execute an executable. The nature of the executable can vary, but today they receive processes that perform mathematical simulations over a certain range and report results back to the central system. Other software processes the results and gives real-time statistics and an overview of what's happening. The daemon also monitors system activity, and when the system has been idle for a predetermined time, it asks for tasks which it can execute in the background. This isn't rocket science, you know – any competent engineer can do it. Heck, I did it as a junior.
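The idle-watching half of such a daemon is similarly small. A sketch, with the load threshold and timings invented; a real agent would also watch keyboard and mouse activity, not just load average:

    import time

    LOAD_THRESHOLD = 0.10  # 1-minute load average below this counts as idle
    IDLE_FOR = 300         # stay idle this long before volunteering
    POLL = 30              # seconds between checks

    def load_average():
        # Linux-specific, purely for illustration.
        with open("/proc/loadavg") as f:
            return float(f.read().split()[0])

    idle_since = None
    while True:
        if load_average() < LOAD_THRESHOLD:
            idle_since = idle_since or time.time()
            if time.time() - idle_since >= IDLE_FOR:
                print("idle long enough: asking the controller for a task")
                idle_since = None  # pretend we fetched and ran one job
        else:
            idle_since = None
        time.sleep(POLL)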