Say you’ve got a big computation task to perform. Perhaps you have found a way to cure cancer, or you want to look for aliens. All you need is a few supercomputers to work out the calculations, but you’ve only got the one PC on your desk. What to do? A popular solution is the “distributed computing” model, where the task is split into smaller chunks and performed by the many computers owned by the general public. This guide over at Kuro5hin shows you how.
What if you need to transfer a large amount of data between processing nodes?
A cluster is the next cheapest thing.
A while ago I used the PVM3 library on Linux for a university assignment, which was to do parallel processing on images (apply filters and all that). The thing is, I came to the conclusion that for it to be useful you need a high-speed network dedicated to it; otherwise, if you’re sharing the network with other users, it’s probably faster not to distribute.
http://www.netlib.org/pvm3/
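From what I remember, the master side of that assignment looked roughly like the sketch below. The worker program name ("filter_worker"), the image size and the band split are made up for the example; only the pvm_* calls are the standard PVM3 API. Shipping the pixel bands is exactly the part that hammers a shared network.

    #include <stdio.h>
    #include "pvm3.h"

    #define NWORKERS 4
    #define ROWS     1024
    #define COLS     1024
    #define MSGTAG   1

    /* Master side of a PVM3 image-filter job: ship each worker a band of
       image rows, then wait for the filtered bands to come back. */
    int main(void)
    {
        static unsigned char image[ROWS][COLS]; /* image to be filtered */
        int tids[NWORKERS];
        int band = ROWS / NWORKERS;
        int started, i, first;

        pvm_mytid();                            /* enroll this task in PVM */
        started = pvm_spawn("filter_worker", NULL, PvmTaskDefault,
                            "", NWORKERS, tids);

        for (i = 0; i < started; i++) {
            first = i * band;
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&first, 1, 1);            /* which band this is      */
            pvm_pkint(&band, 1, 1);
            pvm_pkbyte((char *)image[first], band * COLS, 1);
            pvm_send(tids[i], MSGTAG);          /* ~256 KB per worker here */
        }

        for (i = 0; i < started; i++) {
            pvm_recv(-1, MSGTAG);               /* any worker, same tag    */
            pvm_upkint(&first, 1, 1);
            pvm_upkbyte((char *)image[first], band * COLS, 1);
        }

        pvm_exit();
        return 0;
    }

On a dedicated switch this is fine; on the shared department LAN the sends and receives took longer than just filtering the image locally.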
For massive-scale grid computing, you need to be able to distribute your data easily.
This seems like pretty common sense to me.
So, if each node needs 100MB of data, try a different algorithm or rent a compute farm that runs on a high-speed LAN.
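To put rough numbers on that 100 MB case: the link speeds and the hours-of-CPU-per-work-unit figure below are assumptions for the sake of the example, not numbers from the article, but the ratio is what matters.

    #include <stdio.h>

    /* Back-of-envelope: how long does it take just to move a 100 MB
       work unit, compared with actually computing on it? */
    int main(void)
    {
        double unit_mb       = 100.0;  /* data shipped per work unit       */
        double modem_mbit_s  = 0.056;  /* 56k dial-up                      */
        double dsl_mbit_s    = 1.0;    /* typical home broadband (assumed) */
        double lan_mbit_s    = 100.0;  /* fast-Ethernet LAN in a farm      */
        double compute_hours = 2.0;    /* assumed CPU time per unit        */

        printf("compute per unit : %6.2f h\n", compute_hours);
        printf("transfer, dial-up: %6.2f h\n", unit_mb * 8.0 / modem_mbit_s / 3600.0);
        printf("transfer, DSL    : %6.2f h\n", unit_mb * 8.0 / dsl_mbit_s   / 3600.0);
        printf("transfer, LAN    : %6.4f h\n", unit_mb * 8.0 / lan_mbit_s   / 3600.0);
        return 0;
    }

Over dial-up the transfer alone costs more than the computation; on a LAN it is seconds, which is the whole argument for the compute farm.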
#m
Interesting article.
DC is fine for the poorer supercomputer problems: those that can be reduced to big-compute, small-communicate work packets, and that can sign up a significant number of PC users.
The problem is that the corporate types will forbid DC on their PCs, and the general population will probably lose interest. I run the Stanford protein-folding screensaver and I am already bored with it; I would rather they let another screensaver share cycles with it. As the article says, the screensaver gets only a fraction of the idle cycles: as I sit here the PC is 96% idle, but the screensaver only comes on rarely.
Anyway, a Google search for any of the DC problems plus “FPGA” will show that those more serious about their work build the FPGA engines they need to get the work done more quickly.
The drug companies have the problems and the money to do this big time. The SETI, RC5, DES, and Mersenne prime number searches are passing fads IMHO, although they are doing much better than I originally thought.
The protein-folding engines I have read about are about 500x faster than general PCs, and that will only increase, as the current crop of new FPGAs (Xilinx Virtex Pro) are on a much faster effective compute growth rate than even x86. Whereas PCs may really double in speed every 2 or 3 years, FPGAs get bigger, faster, and more useful all at the same time, so compute growth may be >>2x every year. This will last for a few more years too.
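Compounding those rough rates shows how quickly the gap opens up. Both rates are just my figures from above, not measured data (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    /* CPUs doubling every ~2.5 years vs FPGAs doubling every year,
       compounded over five years. */
    int main(void)
    {
        int yr;
        for (yr = 1; yr <= 5; yr++) {
            double cpu  = pow(2.0, yr / 2.5); /* 2x every 2.5 yrs */
            double fpga = pow(2.0, yr);       /* 2x every year    */
            printf("year %d: CPU x%5.1f  FPGA x%5.1f  gap x%4.1f\n",
                   yr, cpu, fpga, fpga / cpu);
        }
        return 0;
    }

After five years the FPGA side is ahead by roughly an extra 8x on top of whatever head start it already had.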
Now if you are really serious about your compute problem (like, say, the government working on DES crunching), you can also prototype it on an FPGA and then turn it into an ASIC for yet another order of magnitude or two of speedup.
Now it benefits SW developers who have such a problem to learn HW design, and Verilog/VHDL in particular, to learn how to map problems onto massively parallel HW engines. I wouldn’t mind working on such a problem myself. The DES and Mersenne problems are relatively trivial to turn into HW; the folding problems are much more interesting, and more difficult, I suspect. As I have said before, since FPGAs are my favourite subject, speedups can be anywhere from <1x up to 1Mx depending on how well the problem fits an FPGA versus SW.
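To show why the Mersenne case is about as simple as compute kernels get, here is the whole Lucas-Lehmer test as a toy C sketch. Exponents are kept below 32 so the squaring fits in 64 bits; the real searches (GIMPS) use FFT multiplication, but the inner loop is still just square, subtract 2, reduce, which is why it drops so naturally into a pipelined HW engine.

    #include <stdio.h>
    #include <stdint.h>

    static int is_odd_prime(uint32_t p)        /* LL needs a prime exponent */
    {
        uint32_t d;
        if (p < 3 || p % 2 == 0)
            return 0;
        for (d = 3; d * d <= p; d += 2)
            if (p % d == 0)
                return 0;
        return 1;
    }

    /* Lucas-Lehmer: 2^p - 1 is prime iff s hits 0 after p-2 iterations. */
    static int mersenne_is_prime(uint32_t p)
    {
        uint64_t m = (1ULL << p) - 1;          /* the Mersenne number       */
        uint64_t s = 4;
        uint32_t i;

        for (i = 0; i < p - 2; i++)
            s = (s * s + m - 2) % m;           /* square, -2, reduce mod m  */

        return s == 0;
    }

    int main(void)
    {
        uint32_t p;
        for (p = 3; p < 32; p++)
            if (is_odd_prime(p) && mersenne_is_prime(p))
                printf("2^%u - 1 is prime\n", p);
        return 0;
    }

One multiply-accumulate and a modular reduction per cycle: that is the sort of loop you can replicate across an FPGA until you run out of gates, which is exactly why it is a poor showcase for DC and a great one for custom HW.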