Graphics hardware has grown from a simple display controller into a full-fledged, specialized processing unit over the last ten years. The amount of included memory has increased from a few kilobytes to hundreds of megabytes. But what other uses can be found for your video card when you’re not playing a game or using the latest CAD package?
This is very promising, especially since using the video RAM as swap is something you can do right away. But what happens to that swap when you want to play a game?
Still, this is an exciting development. If BeOS were still alive and kicking, it would have made quick use of it, because support for multiple CPUs was really built in, not only in the kernel but in the API too.
I like the memory swap idea. However, I liken it to a victim cache, like the L2 cache on AMD Athlon CPUs. That is, when pages are evicted from main memory, they go to the victim cache, and if they’re evicted from the victim cache, they go back to disk.
I think this is a pretty good match, because victim caches don’t need to be very large, which is great considering not everyone has a 512 or 256 MB video card.
It’s great to see some people actually working on things like this. I’ve been frustrated that my (expensive) card sits around most of the day doing nothing, except for chewing energy.
I guess the swap would be dumped directly onto the hard disk, or the amount of RAM available to your games would be significantly reduced.
My AMD64 box has 1024 MB of DDR PC2700, so I doubt the swap is ever used.
Go give your father a hug for giving you that little toy to play with..
OR
Stick to the article..
“Stick to the article..”
Hmm, well I sincerely thought I did just that. However, regarding security, it would be nice if nothing actually wrote to /dev/kmem, but your mileage may vary.
Like the author said, you probably wouldn’t buy a video card just for the extra memory. On the other hand, if you aren’t using that video memory for anything else, why not use it for swap space? (I know that I don’t need 128 MB of video RAM, but I have it because the card has it.)
The real point was to use it as a secondary CPU. I could imagine this sort of setup being very valuable in many environments. How about a less expensive machine for modelling colliding galaxies:
http://www.sdsc.edu/GatherScatter/GSfall95/4imax.html
Tree codes easily allow processing to be divided across machines with minimal communication between processors. Or maybe do a bit of climate modelling. Using cheap GPUs and PCI instead of expensive high-speed network interfaces would probably get you pretty far.
I just want to ask: how can you make the GPU process your data?
Do you write your data through pointers into the video RAM, and the GPU automatically processes it?
Or what trick is used to put the GPU to work?
I’m thankful for every answer!
AFAIR, computation is not done in good old C or assembly; it is done using a shading language (GLSL or Cg).
Most papers on GPGPU seem to use NVIDIA cards, so I suppose they use Cg there.
You can find sample GPGPU code on the NVIDIA developer site:
http://download.developer.nvidia.com/developer/SDK/Individual_Sampl…
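To give a flavour of what that looks like: below is a minimal, illustrative GLSL fragment shader that adds two arrays element-wise, with the arrays packed into 2D textures. The uniform names arrayA and arrayB are made up for this sketch, and you would still need host-side OpenGL code (render-to-texture setup, drawing a full-screen quad, reading results back with glReadPixels) to actually drive it:

    // Illustrative GPGPU fragment shader: element-wise addition of two
    // arrays stored as 2D textures. One fragment = one output element.
    uniform sampler2D arrayA;  // first input array (hypothetical name)
    uniform sampler2D arrayB;  // second input array (hypothetical name)

    void main()
    {
        // The interpolated texture coordinate acts as the array index.
        vec4 a = texture2D(arrayA, gl_TexCoord[0].st);
        vec4 b = texture2D(arrayB, gl_TexCoord[0].st);
        gl_FragColor = a + b;  // result lands in the output buffer
    }

Drawing a single quad that covers the output texture then runs this tiny program once per element, in parallel across all the fragment pipelines, which is the whole trick behind GPGPU.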
Well, like the article said, there have been C libraries ported to use GPUs. It would be very interesting to get some of these libraries and play around with them.
Good article, and it looks like an interesting idea. I just wish the author had gone into a bit more detail on how it is implemented.
I bet Linux, and maybe the BSDs, will have some of these techniques implemented before anybody else, just because proprietary OS and hardware companies would have a conflict of interest.
Imagine what Intel would think if NVIDIA were selling a $250 graphics card that in many instances outperformed Intel’s best offerings.
That would hold many people back from building a dual-CPU workstation.
Keep in mind, this only applies when and if the technology to utilize the GPU as a second CPU is perfected. I think it would be cool.
So what’s the feasibility of running the entire OS off a video card?
Considering that for like 50 bucks you can get a 350 MHz processor (although it’s a GPU), 128 MB of RAM and a TV-out, it seems like a cheap route to an appliance media PC. Particularly the TV-out thing, because most cheap motherboards don’t have S-Video or composite out (actually I don’t know ANY that have integrated TV-out), hence requiring a video card anyway to go in the living room. Yeah, it probably wouldn’t handle things like surfing the net so well, but I imagine it could handle video and MP3s well enough, if you could make a live CD that would recognize and run solely off a GPU.
Well, as good an idea as I think that is, it’s also incredibly infeasible in real life, and your last statement gives it away:
‘a live CD that would recognize and run solely off a GPU.’
Where are you going to plug in your CD-ROM? For that you need a disk controller, and for that you’d need either one integrated into a motherboard, or a PCI controller, which would ALSO necessitate a motherboard. And as long as you need a motherboard, you’re going to need a CPU and main memory.
So unless you could build some custom hardware to interface directly between disk/network and your GPU, you’d effectively be recreating everything you already need for a working computer anyway.
It isn’t possible to run the whole system off the graphics card. Video cards don’t have general-purpose CPUs (or they have really low-powered ones). They also don’t have all the other stuff (like the I/O devices) needed for running a system.
Graphics cards have lots of power in their GPUs. But GPUs are not general-purpose processors; they are specialized for graphics. What people have been doing is using GPUs for certain calculations by writing them in a graphics language.
Well, I/O is a problem, but if they can get Linux/NetBSD onto toasters, video game systems, Cell processors and everything else under the sun, they can damn well hack a version that runs on a GPU-style processor.
No, they cannot make Linux, or any other OS, even a far simpler one, run on a current or even next-gen GPU. As another poster said, there’s a big difference between general-purpose CPUs such as the x86 or PPC and GPUs.
The ‘toasters’ you mention all have a general-purpose CPU inside them. As do all the game consoles. Cell is also a general-purpose CPU (although very different in design from most conventional processors).
GPUs are ridiculously fast for certain operations when compared to a general-purpose CPU, but at the same time there are many things they are just not capable of. At all.
There is no way to make them do all the stuff that general-purpose CPUs can do.
“GPUs are ridiculously fast for certain operations when compared to a general-purpose CPU, but at the same time there are many things they are just not capable of. At all.
There is no way to make them do all the stuff that general-purpose CPUs can do.”
I agree, but think about how home computers are mainly used: writing a letter, rendering web pages, supporting an extra-heavy “luser friendly” interface full of bells and whistles, encoding MP3 and DivX, and running extra-heavy latest-generation 3D games.
Typical tasks are heavily biased towards multimedia applications, so IMHO the home computer should be rethought as a GPU-centric machine supported by a secondary, general-purpose CPU, not like today, where PCs are assembled around a 110 W, n-core, n-zillion-Hz CPU.
That’s ridiculously fast for writing a letter AND ridiculously slow and inefficient for multimedia tasks. So we need a GPU anyway, one that will be underused most of the time and will remain undersupported, since the development kits and languages are so strongly oriented towards being specialist tools for game developers: there is no Visual Studio or Lazarus or Xcode that will simply compile an application to use the brute GPU power for parallel matrix calculations (except when you are explicitly “doing graphics”), browsing databases, reading and writing files, running crypto or compression algorithms, etc.
When a good RAD tool and compiler come out for a GPU-centric machine with a complementary CPU, we will finally have an energy-efficient and cost-effective supercomputer with today’s (or yesterday’s) hardware.
“When a good RAD tool and compiler come out for a GPU-centric machine with a complementary CPU, we will finally have an energy-efficient and cost-effective supercomputer with today’s (or yesterday’s) hardware.”
By the way, that’s what many vendors are about to do with consoles, but obviously and sadly, they are doing it from their own “point of view”: customized OSes and SDKs (with development tools strongly game-oriented), zero interoperability, DRM.
So, a great opportunity lost for having a decent general-purpose development platform on a GPU-centric machine…
Using your video card for swapping is nothing new: http://hedera.linuxnews.pl/_news/2002/09/03/_long/1445.html for example explains how it works.
I tried that out back then with a TNT2 and it worked (I used another howto, though). But I couldn’t notice any speed difference when I mounted the graphics RAM as a regular file system compared to my hard drives.
But on the other hand, I did not test it extensively.
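For anyone who wants to reproduce it, the recipe boils down to mapping the card’s unused video RAM as a block device with the Linux MTD slram driver and swapping onto it. A rough sketch follows; the physical address and size below are placeholders only, since you have to read the real values for your card from lspci, and mapping a range the framebuffer is actually using will corrupt your display:

    # Find the card's memory aperture (the prefetchable region) with lspci -v,
    # then map a chunk of video RAM the framebuffer isn't using as an MTD device.
    # 0xe8000000 and +0x4000000 (64 MB) are example values only.
    modprobe slram map=VRAM,0xe8000000,+0x4000000
    modprobe mtdblock

    # Format it as swap and enable it with a higher priority than disk swap,
    # so pages evicted from main memory land in video RAM first.
    mkswap /dev/mtdblock0
    swapon -p 10 /dev/mtdblock0

    # Before starting a 3D game, hand the memory back to the card:
    swapoff /dev/mtdblock0

The swapon -p step gives you exactly the victim-cache arrangement suggested earlier in the thread: pages spill to video RAM first and only then to disk.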
Check out this page: http://www.bionicfx.com/ They are making an audio effects engine.
I would think that you could access the GPU via the driver interface: pass in whatever you want computed or done by talking to an advanced driver.
Is there something like that (a driver to use the video card’s spare RAM as a virtual disk) available for Windows XP? That would make a cool little project.