The purpose of this document can be informally stated as follows: if you were to use virtualization in an endeavor (research or otherwise), here are some things to look at.
The article touches on many subjects; a great read, though I skimmed the last parts.
Now he touches on HW. Since he mentions SystemC (an Occam-like add-on to C++ for modelling HW & SW systems), I will go further. What if the CPU is instantiated on an FPGA? Now the CPU is really virtual. If the CPU were one you were familiar with, there would probably be lawsuits, and it would run at least 5x slower than the original. But if the CPU were an original design, say for my version of P-code or Java bytecodes, or better still Occam, it would run about as well as a low-budget ASIC, say about 250MHz (about 250 MIPS), and might only cost a few $ per instance depending on features. Now it becomes possible to build the cluster you want within a larger FPGA, up to about 50 nodes, assuming the heat can be handled.
Now what if this CPU allowed virtualization too? It gets confusing pretty quickly. What if this CPU were highly replicated and ran millions of lightweight processes that were perhaps running a HW simulation (or is it an emulation?) or a supercomputing problem?
How would you observe so many processes? Task Manager or Process Manager wouldn't do. A process viewer for watching so many hierarchical threads would in fact start to look a lot like a VLSI layout-editor view of a chip, except that the magnified process view would reveal the current state of each process rather than logic gates. In the SW Occam view of PAR processes, though, a process P can start up and terminate whole families of subprocesses, so now the view is of a chip that is changing its shape. Reconfigurable computing.
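As a rough sketch of that PAR idea (in Go, whose goroutines and channels descend from the same CSP roots as Occam; this is just my illustration, not Occam itself), a parent starting up and terminating a family of lightweight processes might look like:

```go
package main

import (
	"fmt"
	"sync"
)

// One "family" of lightweight processes, started and later terminated as a
// unit by a parent, in the spirit of an Occam PAR block. Each child does a
// trivial local computation and reports over a channel, CSP-style.
func parFamily(id, n int, results chan<- string) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // one lightweight process per cell
			defer wg.Done()
			results <- fmt.Sprintf("family %d, process %d done", id, i)
		}(i)
	}
	wg.Wait() // the whole family terminates here, and the "chip" changes shape
}

func main() {
	results := make(chan string)
	var families sync.WaitGroup
	for id := 0; id < 2; id++ { // families come and go over time
		families.Add(1)
		go func(id int) {
			defer families.Done()
			parFamily(id, 4, results)
		}(id)
	}
	go func() { families.Wait(); close(results) }()
	for line := range results {
		fmt.Println(line)
	}
}
```

A process viewer over something like this would show families appearing and disappearing, which is the shape-changing chip view I mean.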
Just some thoughts which follow from my work.
JJ, interesting post. Do you have a link to some of your work or a list of publications?
If you Google for FPGA and Transputer, you will find half of what comes back is me, from recent rants. Transputing (massively parallel computing done the right way, IMNSHO) kind of died 10 years ago, only because one company (Inmos/ST) couldn't get the next version done. Then FPGAs came along, and they can do a lot of the same thing, but they require being designed by HW guys to do what used to be done by SW guys (with a HW bent). That's the wrong way around, since there are 1000x more SW guys than FPGA HW guys.
In my work I try to show that, for a large class of problems, especially supercomputing and grid problems, FPGAs and Transputing are essentially similar and complementary, in that they both rely on masses of cells performing local computations. FPGAs work at a very low level; Transputers work at close to the C level. Either way, C-like assignments can be written in any of the Occam-like languages and run as parallel code, or can be synthesized into HW and run as look-up tables.
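For example, here is how one such cell might look. This is a hypothetical sketch in Go (again for its CSP-style channels) rather than real Occam or an HDL: the assignment c = a + b becomes a process that reads its operands from channels and writes the result to one, the same shape that synthesizes naturally into an adder cell of look-up tables in HW.

```go
package main

import "fmt"

// adder is one "cell": it repeatedly performs the local computation
// c = a + b, with all communication over channels (the Occam model).
// In HW the same behavior would be a small adder built from look-up tables.
func adder(a, b <-chan int, c chan<- int) {
	for {
		x, ok1 := <-a
		y, ok2 := <-b
		if !ok1 || !ok2 { // inputs exhausted: the cell shuts down
			close(c)
			return
		}
		c <- x + y
	}
}

func main() {
	a, b, c := make(chan int), make(chan int), make(chan int)
	go adder(a, b, c)
	go func() {
		for i := 0; i < 3; i++ {
			a <- i
			b <- i * 10
		}
		close(a)
		close(b)
	}()
	for v := range c {
		fmt.Println(v) // 0, 11, 22
	}
}
```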
Googling for FPGAs and reconfigurable computing can lead down many interesting paths. Transputer and Occam references are becoming dated, but they are now being reinvented by the new kids who never heard of them the first time around.
Of course I am working on such a CPU & compiler, but there is much, much work to do yet.
Regards
JJ
A bit short on details, of course, but that’s to be expected.
Seriously, could we start getting story summaries that actually contain something?
“The purpose of this document can be informally stated as follows: if you were to use virtualization in an endeavor (research or otherwise), here are some things to look at.”
is completely vague and meaningless.