ExtremeTech explains how to build a rendering farm out of old computers. “At its core, a render farm is pretty simple: Seven or so machines on a network, a network-accessible storage location, a rendering app, and a queue manager. Putting it all together should be equally simple, right?”
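For anyone wondering what the "queue manager" piece of that list actually amounts to, here is a minimal sketch of the idea in Python: workers pull frame numbers off a shared queue and invoke the renderer on a scene file sitting on the network storage. This is only an illustration; every path and the renderer command below are placeholders, not anything the article specifies.

# Minimal render-queue sketch (illustrative only; all paths and commands are placeholders).
import subprocess
from queue import Queue

SHARED_SCENE = "/mnt/farm/scene.blend"   # scene file on the network-accessible storage
OUTPUT_DIR = "/mnt/farm/frames"

def render_frame(frame):
    # Swap in the real command line of whatever renderer you actually use.
    cmd = ["my_renderer", SHARED_SCENE, "--frame", str(frame),
           "--output", f"{OUTPUT_DIR}/frame_{frame:04d}.png"]
    subprocess.run(cmd, check=True)

def worker(job_queue):
    # Each farm node runs one of these, pulling frames until the queue is empty.
    while not job_queue.empty():
        render_frame(job_queue.get())

if __name__ == "__main__":
    jobs = Queue()                # a real queue manager shares this job list over the network
    for frame in range(1, 801):   # 800 frames, as in the article's own example
        jobs.put(frame)
    worker(jobs)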
I will be better than the latest graphics card.
I am already better
It doesn’t add anything of value to anyone who is actually interested in setting up a render farm.
I might have missed something, but I don’t see any explanations about how to set up various programs for farm rendering. What I did see was stuff about dumpster diving and how to set up a network on which you log on with the same user/pass on all computers.
I dunno, but I think this article is slashdot quality, and I’d like to think that OSNews is better than that.
Indeed, nothing really useful here.
I did find it quite amusing, though, that their suggestion for the easiest way to manage a Windows network is to use OS X Server, because Windows 200 itself has too many bugs.
Oops, Windows 2000 even. I assume there wasn’t a Windows 200. If there was, you’d have thought they’d have sorted the bugs out by now…
Yeah, come on, this site has gone downhill for a year already… they post the most pointless, biased stuff.
As you said, little value. Furthermore, the article is downright wrong in its basic assumption:
“Nine 1-GHz machines chained together will render much faster than three 3-GHz systems”
Anyone who’s actually built a farm can tell you that render speed scales roughly linearly with total clock speed, so those two setups finish in about the same time. In the above example the 3×3 GHz is to be preferred due to reduced energy costs and less overhead.
Look at the final result they claim:
“A local render on one of the school’s 2-GHz […] in 52 hours. Using a one-worker farm, our time shoots up to 104 hours-unsurprising, considering the age of the machine. But after adding six more workers to the farm, the same render clocks in at just over 13 hours, more than three times as fast as in pre-farm days.”
More than three times? It is exactly 4 times. They had eight 1-gig CPUs compared with one 2-gig CPU (don’t be fooled by the “adding six more workers” line).
Linear. And all about Hz. Never mind the manufacturer, the amount of cache or the generation of the CPU. Hz!
Best regards,
Mats Johannesson
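For what it’s worth, here is the same back-of-the-envelope arithmetic spelled out, using the poster’s own assumption that render throughput scales linearly with total clock speed and nothing else:

# Linear-scaling sanity check of the article's numbers. Crude GHz-hours model;
# it deliberately ignores architecture, cache and overhead, which is the poster's point.
work = 52 * 2.0                    # 52 hours on one 2-GHz CPU = 104 "GHz-hours" of work
one_worker = work / (1 * 1.0)      # one 1-GHz worker  -> 104 hours, matching the article
eight_workers = work / (8 * 1.0)   # eight 1-GHz workers -> 13 hours
speedup = 52 / eight_workers       # exactly 4x the original 2-GHz machine
print(one_worker, eight_workers, speedup)   # 104.0 13.0 4.0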
To say flat out that Hz is the only thing that matters is, I think at least, a little off. It’s been my experience that when farming, more machines can actually make more of a difference than more raw Hz.
There are a number of factors, a big one being faster overall access times to hard drive and memory caches.
A lot of it depends on what you’re rendering and how your farm is managing its memory situation.
Yes, of course, Hz != everything. I exaggerated [“hårddrog” in Swedish, roughly “went to extremes”] to make a point in reaction to the poor article.
Also, there _are_ differences in CPU architecture. E.g. an AMD Athlon at a given clock speed has been measured rendering a lot faster than its Intel counterpart at the same Hz.
Crap articles with lots of money-generating ads (I clicked on the “print” version immediately) bring out the worst argumentation technique in me.
Well, this is OSNews so… few care 😉
Best regards,
Mats Johannesson
There was a Windows 2.00
That was a long time ago, though, and so bad as not to be worth mentioning.
–JM
Does anyone know of an open source queue manager that could be used with Apache w/modules?
Pretty worthless article; these might help:
http://unu.novajo.ca/simple/archives/000026.html
http://www.lists.apple.com/mailman/listinfo/xgrid-users
I agree that this article is useless, both in general and specifically as an answer to my idea of using my Linux boxes for iMovie rendering. In fact, the links in the replies helped quite a lot and led me to this specific post http://lists.apple.com/archives/xgrid-users/2005/May/msg00116.html
and then http://www.cocoadev.com/index.pl?XGridDummyPlugin
Now my question is: are there Automator or AppleScript tools available to render an iDVD image, H.264, or .mov files out of an iMovie project?
I should go and do some more research on it …
Meet DrQueue
http://www.drqueue.org/
This is a very bad article. I use DrQueue to manage a six-node render farm, and it works just fine on Windows. I don’t know which version he used.
String together tons of old computers and ancient hardware and the electric company nearest to you will be clicking their heels together with joy and sliding down magic rainbows into pots of gold.
In the article, the author repeatedly demonstrates he knows nothing about building farms by telling us about the graphics card in the machine (e.g. “A hardcore graphics-head recently told me that after throwing a $700 graphics card into his 3-GHz PC”, etc.).
95% of the time, you don’t want a graphics card in a farm machine. The only time you would is if you want to batch hardware playblasts out of Maya, which you’d do on your desktop anyway.
Enterprise-class farms are there to get work done completely overnight, or to turn around several tests during the day, which usually takes hundreds of machines. IMO, a small farm is mostly there to let you keep working while your main machine is busy. And when you start distributing even to a small farm of several machines, there are additional concerns like keeping network traffic under control through caching and optimizing through frame-range batching.
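To illustrate what frame-range batching means in practice (my own sketch, not something from the article): rather than handing each worker one frame at a time, the queue manager hands out contiguous chunks, so each node loads the scene and its textures once per chunk and the network stays quiet.

# Hypothetical frame-range batching: split a frame range into contiguous chunks
# so each worker opens the scene once per chunk instead of once per frame.
def batch_frames(first, last, batch_size):
    # Yield (start, end) ranges covering first..last inclusive.
    start = first
    while start <= last:
        end = min(start + batch_size - 1, last)
        yield (start, end)
        start = end + 1

jobs = list(batch_frames(1, 800, 25))   # 800 frames in batches of 25 -> 32 jobs
print(len(jobs), jobs[0], jobs[-1])     # 32 (1, 25) (776, 800)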
For most home or very small enterprises, approaching this in the way of the article is overkill. Just use the built-in satellite rendering that most 3D packages come with today and you’ll probably be pretty happy.
“In the article, the author repeatedly demonstrates he knows nothing about building farms by telling us about the graphics card in the machine (e.g. “A hardcore graphics-head recently told me that after throwing a $700 graphics card into his 3-GHz PC”, etc.).
95% of the time, you don’t want a graphics card in a farm machine. The only time you would is if you want to batch hardware playblasts out of Maya, which you’d do on your desktop anyway.”
Uh, Chef, the point was that this was a $700 card in a desktop workstation.
The point was that for much less than a $700 card you can cobble together something that will SLAY it if you build a render farm.
—
But I will agree that the article was weak overall. I wanted more of a “how to” and nitty-gritty details. If a person knew nothing about computers and render farms, or if this were the introductory part of a series on building a render farm, then yes, it’s a good article; but as is, it tells a tech-savvy reader very little that they haven’t already figured out for themselves.
Funny article, especially those parts about W2K and the PDC. Has he ever heard of AD? Since W2K there is no “PDC”; there are DCs, and one of them holds the “PDC emulator” role. That part alone tells a lot about the author and his “knowledge”. Also, it would be nice to calculate the cost of electricity for that farm.
I’m interested in building a render farm just because I have some nice hardware to do it with. I would love to see what it can do, perhaps compiling.
Anyway, this article is no help; looks like I will have to look elsewhere.
There is nothing I hate more than when people have a bunch of computers and use the word ‘farm’ to represent a network of computers.
moooo !!
Next they’ll be calling individual clusters of nodes ‘cattle’.
The computing paradigm is going backwards!
i prefer sheep over cattle…..
This article has gotten me thinking about setting up my own farm. As it turns out, electricity is part of my monthly rent, and I’m sure I could make a few hundred bucks by simply leasing it out to some of the VTD students.
I am also interested in building a render farm; the article is nice in that it gives you an idea of what categories of software you need. One thing I am concerned about is electricity cost: in the article they implemented the “farm” at some school department, which is hardly a home-type installation.
For under £1000 I can roughly build a 15-20 node cluster using old hardware, but one of the things stopping me from even going down that route is the cost of electricity… I have no way of knowing how this will impact my bill (any ideas, anyone? I live in the UK)…
It would have been nice if the article had shown the benefits vs. cost, so we could know how far we can take our hardware before it’s more cost-efficient (and efficient) to use a commercial solution…
Well, UK power is somewhere around £.15 per unit (http://www.uswitch.com/energy/Popups/pop_SwitchDetails.asp?popup=tr…), so I guess it would just be:
Power (150 W?) * Hours (24?) * Days (30? 90?) * Price (£0.15/kWh?) * Machines (15?), all divided by 1000 to convert watt-hours into kWh.

Making (150 * 24 * 90 * 0.15 * 15) / 1000 = £729 per quarter for 15 machines on all the time.
Unless I’ve done something stupid of course, which is always likely.
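Here is the same estimate as a few lines of Python, in case anyone wants to plug in their own numbers; the 150 W draw and £0.15/kWh are just the guesses from the post above, not measured figures.

# Rough electricity cost for a small always-on farm (all inputs are guesses).
watts_per_machine = 150     # average draw per node, in watts
hours_per_day = 24
days = 90                   # one quarter
price_per_kwh = 0.15        # GBP per kWh
machines = 15

kwh = watts_per_machine * hours_per_day * days * machines / 1000.0
cost = kwh * price_per_kwh
print(f"{kwh:.0f} kWh, about £{cost:.0f} per quarter")   # 4860 kWh, about £729 per quarter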
No offense, but that screencap is cartoonish at best, and nowhere near as good as what Far Cry manages in realtime. If it’s taking him 52 hours to RENDER 800 frames of THAT, he needs to get something better than a Pentium 75 and POV-Ray.
Further proof that he has no idea what he’s talking about: there is no difference in price between Softimage and Maya. Both cost $2000 for a version that includes mental ray’s standalone rendering engine; the $500 version ($700 for Linux) of Softimage can only render inside XSI.
The rest of the difference between Soft and Maya comes down to personal preference in modeling tools.
You would think that, for an article such as this, he would have actually set up and used the “cheapo” configuration he suggested.