Today’s microprocessors will become extinct by the end of the decade, to be replaced by computers built onto a single chip, Sun Microsystems Chief Technology Officer Greg Papadopoulos said Tuesday.
Wasn’t it also Sun who proclaimed the end of the PC era and the beginning of the post-PC era a couple of years ago? Wasn’t all the functionality currently offered by PCs supposed to be spread among other devices such as cell phones and internet-enabled toasters, so that PCs as such would cease to exist?
Computers as we know them today will change at an extremely rapid rate, but if he thinks they will be replaced by computers built onto a single chip, he is not thinking straight. Perhaps I don’t understand what this vague “microsystem” is, but if it means merging everything, consider the complexity of each component, and then consider merging it all. Does he think any company or consumer would ever want that to happen?
When you get to the point where you can fit a 100-million-transistor GPU and a 100-million-transistor CPU onto a single chip, people will just build 200-million-transistor GPUs and 200-million-transistor CPUs. We’re not going to see separate CPUs and GPUs go away until GPUs become fast enough, and that won’t happen for decades. At that point things will consolidate (notice how the system “chipset” consolidated first into north and south bridge chips, then into just a single chip, and is now starting to move onto the CPU).
Isn’t that what we already have with built-in components such as the video/sound/network hardware on the motherboard?
Today’s processors are millions of UNIVAC computers smushed onto a chip.
I will believe it when I see it…
Predicting the future works best when you get to invent it. Sun is laying out their vision of where things are going: single-chip computer systems. While the number of transistors will keep growing to keep pace with Moore’s Law (and be limited by fabrication realities), designers will keep their options open.
Intel’s Itanium is designed to expand to take advantage of all available space on the CPU die and offer instruction-level parallelism.
The transistor budgets of the chip designers will have to increase beyond the need for CPUs to be on a single chip.
While some chips will be one-chip systems, it may not make sense for generic personal computers. Where it makes sense is in laptop computers and other portable devices, to save energy and lower the cost.
But then, maybe he knows better than me, since he’s the chip designer guy, and after all he’s the one inventing the future of chips at Sun! ;–)
Please ignore this
>The transistor budgets of the chip designers will have to
>increase beyond the need for CPUs to be on a single chip.
and replace it with this:
For the one-chip vision to become reality, the transistor budgets available to chip designers will have to increase much faster than CPUs’ hunger for transistors.
Also, simple embedded systems and bottom-end PCs are a must for this, as in many applications they require less CPU power.
Of course I said the same thing 4 years ago with a 10-year timetable.
The future they were talking about isn’t very far off:
http://www.wired.com/news/technology/0,1282,60791,00.html
In short: 25 billion floating-point operations per second.
A PIII 750 is around 1,400, if that gives you any idea. Someone should write a story on this predicting Sun will be dead by the end of the decade.
What the heck does Sun know these days? Their credibility went out the window a long time ago.
Back in the Jurassic age of computing, when I was proudly squeezing every byte of RAM out of my MS-DOS 3.30’s 640K limit, some idiots made exactly the same prediction.
A millennium later: same idiots, same prediction.
Predictions made in the computer industry are almost always off by a few decades. E.g., 1950-ish, Simon predicted that computers would be able to beat human chess players in 10 years… well, it’s 2003, and Deep Blue is barely on par.
All the mobo chipset makers are surely heading in that direction, especially nVidia, but a fully integrated chip would take some hefty investment. In that sense, the integrated chip could only be successful if it is relatively cheap, assuming it’s doable at all. It’s a chip that you can’t upgrade, or replace if any component breaks down (actually the whole chip would most likely die if one part dies…). The only prospective market I would target is the business sector, especially corporations that buy hundreds of workstations at a time but don’t require them to be super powerful. Another possibility is that it could actually enable the vision of the automated house, since the chips could be put into household appliances; of course this assumes security is done right. That’s just my 2c.
Something still has to execute instructions; if there are multiple computers inside a computer, that does not diminish the fact that something has to be executing code.
I think it makes sense to offload CPU power to devices, for instance, but I don’t see any sense in dedicated machines running specific execution contexts. The recently announced “2n core” designs that can handle multiple threads simultaneously seem to be the next “wave” in CPU design. Makes sense. Even then, multiple CPUs on a motherboard are still inevitable.
However, I don’t see there being 4 mini computers in one case acting as a “micronetwork”. Maybe he assumes that our watch will handle our GUI, while our belt buckle will handle disk access, our pants will run a kernel with access to a CPU sewn into our undergarments, and our shoes will handle task scheduling between these devices, and they will all communicate through 802.z; maybe that is what a “micronetwork” is?
>The future they were talking about isn’t very far off:
>http://www.wired.com/news/technology/0,1282,60791,00.html
>In short: 25 billion floating-point operations per second.
>A PIII 750 is around 1,400, if that gives you any idea. Someone should write a story on this predicting Sun will be dead by the end of the decade.
How is this any different from someone creating their own workstation using off-the-shelf Xeon processors and a NUMA-equipped motherboard?
What Sun is offering is something that will work, unlike the Xeon and its performance-reducing hyperthreading. You may scream “Intel”, but some people here would like to purchase a machine with a feature that actually INCREASES performance, NOT DECREASES it.
Um, the 25-gigaflops chip is not a general-purpose processor. It is a specialized vector processor with 64 FPUs. It’s not good for running general-purpose code, because most general-purpose code can barely use the 3 FPUs in a modern Athlon; it simply cannot be executed in parallel like that.
If that is the case, then why not make a board using MAJC, a VLIW processor from Sun, which is an FP monster and is used in their ultra, super-duper high-end graphics cards.
http://www.sun.com/processors/whitepapers/majcintro.html
Quite interesting information. A while back this was going to be the successor to UltraSPARC, but ultimately they realised that people aren’t going to move if it requires too much pain. Fortunately for Sun, Intel hasn’t learnt this and will continue to flog the dead horse.
I remember that when Sun announced Java, they predicted the end of desktops as we knew them, and hard disks, and all that… I think Microsoft believed this too much and tried to go the same way with .NET… and nothing has happened yet.
There is no “application rental” or any of that other stuff they talked about.
Maybe Sun will get out of the IT industry and take a dive into astrology. My bet is Papadopoulos is positioning himself as the next Chief Psychic.
More seriously, do these IT executives really need to say these kinds of things to attract the media?
While the mainstream press has always focused on Moore’s Law and the ever-shrinking feature size, the real issue driving changes in processor architecture has always been the relative speeds of accessing registers versus accessing main memory.
CISC processors traded CPU cycles for fewer instruction fetches, by using tightly packed instructions. As CPU cycle times improved more than RAM access times, better schemes were needed. RISC gave up the complex instruction decoding, and used those transistors for a larger number of registers and larger caches. While more RISC instructions needed to be fetched than CISC instructions, if a reasonable percentage of them were in the cache, the overall throughput was the same. And more registers meant many loads and stores could be eliminated, reducing the data traffic.
The difference between CPU cycle time and RAM access time is greater than ever. If most of the resources that a thread needs can be brought onto the chip, performance can improve substantially. Limit the interchip transfers to I/O and interthread communication, and the speed penalty won’t hurt as much.
Perhaps an analogy is in order. Computers have a hierarchy of memory systems, with each level trading size against speed. The CPU registers are the fastest, but can store very little. RAM is slower, but much larger. Larger still, and slower still, are hard drives. All three levels have gotten faster over the years, but the difference in speed between levels has gotten greater, if anything. Virtual memory allowed operating systems to lie to applications, substituting disk space for RAM, at the cost of speed. A decent trade-off, since it is better to run a program slowly than not at all. But we all know that if your system does much paging, the best improvement is more RAM.
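To put a rough number on that gap, here’s a little Go program I knocked together (my own sketch, not from the article; the table size is arbitrary) that chases indices through a table far bigger than any cache, once in order and once in a random permutation. The work per step is identical; only the memory access pattern changes, and on typical hardware the random walk comes out many times slower simply because nearly every step has to wait on RAM:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

const n = 1 << 24 // 16M entries, far larger than any cache

// walk chases "next" indices through the table and times it.
// Both tables do the same number of reads; only the access pattern differs.
func walk(next []int32) time.Duration {
    start := time.Now()
    i := int32(0)
    for step := 0; step < len(next); step++ {
        i = next[i]
    }
    _ = i // keep the chain live
    return time.Since(start)
}

func main() {
    seq := make([]int32, n) // visits entries in order
    rnd := make([]int32, n) // visits entries in a random order
    perm := rand.Perm(n)
    for i := 0; i < n; i++ {
        seq[i] = int32((i + 1) % n)
        rnd[perm[i]] = int32(perm[(i+1)%n])
    }
    fmt.Println("sequential walk:", walk(seq)) // mostly cache hits and prefetch
    fmt.Println("random walk:    ", walk(rnd)) // mostly misses: waiting on RAM
}

The CPU does exactly the same arithmetic in both runs; the only difference is how often it has to go out to main memory, which is the gap being described above.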
Intel has focused on faster CPUs. They are at the point where further CPU speed increases buy very little performance increase, because RAM access is now the bottleneck. Sun is moving towards putting the RAM on the CPU. The CPU may not be as fast, but may get more real work done because it’s not waiting for the RAM. This makes sense to me, although I’m less confident that Sun will be the ones to benefit from this architecture shift. But they may.
We already see this trend in PCs. No current high performance systems have the main CPU turning the bits on and off in the display buffer. The CPU sends higher level descriptions to the GPU, which has a faster channel to the display memory. While that is a split between general purpose CPU and special purpose GPU, Sun is proposing that the same benefits apply to multiple general purpose CPUs. They have a good history in designing multiprocessor systems. I think that they know what they are talking about.
>I remember that when Sun announced Java, they predicted the end of desktops as we knew them, and hard disks, and all that… I think Microsoft believed this too much and tried to go the same way with .NET… and nothing has happened yet.
>There is no “application rental” or any of that other stuff they talked about.
Nice to see another PC fanboy take something completely out of context, just like when Scott said, “you have no privacy, GET OVER IT!”, and rumours went around claiming that he is pro-1984-society and social engineering.
The claim that the desktop is dead was about the corporate desktop. It was also based on the fact that it is cheaper and more efficient to have one large piece of big iron with thousands of users accessing it via thin clients than to have thousands of PCs scattered around an organisation with their CPUs sitting idle 95% of the time.
The best analogy is this: let’s say I am a trucking company. If I have 100 trucks but I only need 25 trucks plus 5 backups, isn’t it awfully inefficient to own the other 70 trucks that do nothing?
With centralised processing, whether there are 100 users or 1000, the machine is always being utilised by someone and something, so the capital is always being used.
As for reliability, the average Sun Starfire is so juiced up with redundancy that it has everything one sees on a mainframe, and how often do those go down in a year?
Sun has proven, via their OWN experience, that running a large network on centralised processing is cheaper than having hundreds of PCs sprawled around an organisation being under-utilised.
Actually I can almost see a point here.
Let’s say you have a system where there’s essentially a small CPU embedded in the hard drive and connected directly to the address lines (drives have some processing power already, but I’m thinking bigger). This CPU would basically act in exactly the same way as a file server, complete with a tiny OS that knows all about the various filesystems. That would take quite a bit of strain off the main CPU, because it could just request files instead of having to handle all the details of drive management itself. So I guess it would be kinda like having a multiprocessor system, but with certain processors tasked only with manipulating a specific piece of hardware.
Perhaps they mean something similar.
At the level you are speaking of, the system handles everything in I/O “blocks”. There are already drives and system controllers with processors that off-load work from the “main” CPU and cache I/O requests.
Filesystems are usually a little higher up in the software stack, probably too high to be helpful at the block I/O level.
If you were to redesign the stack, though, so that the filesystem existed at the device layer, then the system would always act on objects (objects could directly reference files) for data requests, requests across the network, etc.; then you could cache objects and there could be an object hierarchy. I think Intel was working with others on an I/O system that directly accessed objects, where objects could be files, data chunks, etc., built on iSCSI.
Of course, some OSs specifically turn all buffering/caching into I/O blocks for caching (including files). This helps with the VM as well, because memory pages written to disk can be picked out of the buffer cache.
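To make the split concrete, here’s a rough Go sketch (mine, purely for illustration; the interface and type names are made up and don’t correspond to any real driver API). Today the host talks to the drive in numbered blocks and keeps all filesystem knowledge to itself; in the scheme the posts above describe, the controller’s own CPU would understand files or objects, so the host just asks for named data, much like talking to a tiny file server or an object-based (OSD-over-iSCSI style) device:

package main

import "fmt"

// BlockDevice is roughly what a host talks to today: dumb numbered blocks.
// All filesystem knowledge lives on the host CPU.
type BlockDevice interface {
    ReadBlock(lba uint64, buf []byte) error
    WriteBlock(lba uint64, buf []byte) error
}

// ObjectDevice is the kind of interface described above: the controller on
// the drive runs its own firmware, understands a filesystem or a flat object
// namespace, and the host simply asks for named data.
type ObjectDevice interface {
    Read(name string, offset, length int64) ([]byte, error)
    Write(name string, offset int64, data []byte) error
}

// smartDrive is a toy in-memory stand-in for such a controller.
type smartDrive struct {
    objects map[string][]byte
}

func (d *smartDrive) Read(name string, offset, length int64) ([]byte, error) {
    obj, ok := d.objects[name]
    if !ok {
        return nil, fmt.Errorf("no such object: %s", name)
    }
    end := offset + length
    if end > int64(len(obj)) {
        end = int64(len(obj))
    }
    return obj[offset:end], nil
}

func (d *smartDrive) Write(name string, offset int64, data []byte) error {
    // A real controller would map this onto blocks, caches and platters;
    // here we just grow a byte slice.
    obj := d.objects[name]
    if need := offset + int64(len(data)); int64(len(obj)) < need {
        grown := make([]byte, need)
        copy(grown, obj)
        obj = grown
    }
    copy(obj[offset:], data)
    d.objects[name] = obj
    return nil
}

func main() {
    var dev ObjectDevice = &smartDrive{objects: map[string][]byte{}}
    _ = dev.Write("/etc/motd", 0, []byte("hello from the drive\n"))
    b, _ := dev.Read("/etc/motd", 0, 1024)
    fmt.Print(string(b))
}

The interesting part isn’t the code, it’s where it runs: everything behind the ObjectDevice interface would execute on the drive’s own processor, so the host CPU never touches block management at all.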
http://www.supercomputingonline.com/article.php?sid=4724
“…Throughput Computing is core to Sun’s strategy for the future of Network Computing and its SPARC(r) and Solaris(tm) systems. At the heart of this new strategy is Chip Multithreading (CMT), a design concept that allows the processor to execute tens of threads simultaneously, thus enabling tremendous increases in application throughput. Sun is bringing CMT to the market in a phased approach. First generation CMT processors, such as the UltraSPARC IV family of processors, can enhance current UltraSPARC III system throughput, initially by up to 2 times, and later by up to 3 to 4 times the current levels. In the future, Sun will be rolling out a more radical CMT design, which will first appear in Sun’s blade platform in 2006, that can increase the throughput of today’s UltraSPARC IIIi systems by up to 15 times.”
The new systems will be on a single chip and of course run Solaris…hahaahahahha!
Get a life, man; the PC as we know it still has a long way to go.
Weeeeeee
Well, at least 2 people here have the right idea.
But I have news for you all: the idea of a single-chip CPU with everything needed on board, with memory interfaces and links to N neighbouring CPUs, and with a thoroughly multithreaded kernel in hardware, was done 20 years ago by Inmos; I was there. It was called the Transputer, but it died for reasons too numerous to mention, mainly that it was too far ahead of its time. It really had to be programmed in Occam to get the maximum benefit from all the threads. In those days DRAM was just starting to look slow compared to the CPU, and threading was not yet a hack to cover up latencies.
Now what I see at Sun, Intel, even Alpha is threads aplenty but no model for those threads to communicate in a fine-grained manner. Sort of half-baked. A scheduler in hardware that can switch threads in basically zero to a few cycles is what everyone is pushing for, to cover up the increasing 100+ cycle latencies of cache misses etc. Better still, if you have a process model where all software is written as a hierarchy of Communicating Sequential Processes (Hoare’s CSP), you can much more easily tell the compiler exactly where the parallelism is, block by block. A lot of software then starts to look a lot more like hardware, with soft wires (channels) connecting concurrent processes (with rendezvous synchronisation), but with a hierarchy that can be much more dynamic than static hardware allows.
Example: a compiler is usually built as a couple of sequential passes, i.e. preprocessor, lexer, parser, emitter, etc. It can use something like the unix pipe scheme to move data from one pass to the next, but that is fairly crude. Using hardware-enabled thread switching and message passing through channel variables would get the most benefit from cooperating threads, rather than multiple unrelated threads fighting for cache space.
Further, once you encourage highly threaded apps, it’s a snap to move processes onto adjacent CPUs using the links. In the Transputer, process communication was transparent, the same inside one CPU as across many CPUs; only the message latency gets longer. Deciding which processes ran on which CPU required some partitioning and balancing, but that could also be automated.
The only thing Transputers didn’t have is all the fancy features we demand today, but that was nearly 20 years ago. Integrating even good graphics plus all the necessary newer I/O onto a CPU is more than easy to do; it is more of a political problem. Even Intel doesn’t control the PC standard, despite their annual hardware workshop. But it will happen, most likely with Via pushing the bottom end up while Intel holds onto the lucrative end.
As anyone who ever used BeOS knows, that is what the future holds: more threading for smoother operation versus using raw hot CPU speed to fake it.
Whether Sun has a clue about this I can’t tell, even though they do know how to build large, big-money clusters. The funny thing is that a single-chip PC will wipe out any profit for almost everyone and could take PCs way below $100 (Timex, anyone?), but what about the cost of the OS? As for repairs and failures, of course the tiny mobo will be disposable.
Of course that would require you to throw away your mobo if you need an upgrade; not a big deal if it is in the sub-$100 range.
I wonder how they are going to deal with cache contention on systems that have multiple CPUs AND multiple threads on those CPUs? Further, how are they going to deal with cache coherency? Will they consider how busy a CPU is, will there be temporary thread assignments to a CPU, how will threads switch to another CPU, how will it deal with internal register assignments, is it all handled by a software scheduler, etc.? Maybe it will just be mindless co-operative multitasking between cores with little concern for cache efficiency. Of course, they may plan on the chips having 512 MB of L2 cache, in which case I guess it wouldn’t matter at all. lol More than likely.
In the future, things will be vastly different in some cases, yet similar to the present in others.
In the CSP model, all processes mind their own xxx business, so to speak. Data exists only as local data within a process; if another process needs some of that data, it must be sent to it as a message. This scheme is much easier to implement and closely reflects how hardware works, i.e. signals are mostly local, but some are buffered up and sent across the chip with a lot of wire-delay latency. In the message-passing scheme, N CPUs cost N times the price of one, but you have to accept the message-passing penalty, which isn’t necessarily bad and is hidden by the threading. But this is the way to low cost.
The industry, though, is mostly focused on the other model, which forces great complexity in keeping caches coherent and does not encourage massively parallel programming, whether on one CPU or many. This is why multiple-CPU systems always cost so much. I have always considered this model to be like the goto-driven control flow of very badly written programs. In structured programs there are far fewer odd gotos, and in CSP languages data doesn’t go anywhere unless it is needed; since processes are mostly sending messages among themselves on the same chip, the I/O bottleneck can be manageable.
I am not sure if Sun understands this or is just pushing more of the same complexity as before.
I have actually wondered about this. So you are basically saying that data requests are handled via a messaging protocol at the hardware level. There is a cost associated with that, but ideally, compared with execution time, only a small percentage of time would be spent waiting for external data. But what about process migration? Would it be handled by a software scheduler, or would it be abstracted away from software so the OS could handle multiple code segments in flight?
…OUR microprocessors will become extinct within the next decade.
He can’t say that in public, but IMO the future for SPARC is rather bleak.
In message-passing schemes, the hardware must handle the passing of data without stealing too many cycles from threads that can still run.
In the Transputer case, channel variables are a bit like a pointer to some type. In C on a regular CPU you would have to call a memmove opcode or routine to do this, and the CPU would sit on that and move data until done. The thread continues when that’s done, but usually no synchronisation is included.
In the Transputer model, the sender and receiver each get descheduled when they arrive at the rendezvous. The moment the second of the two arrives, the hardware moves the data between the two processes using whatever hardware is available for that. When that completes, the second process can resume and the first process will be rescheduled to run ASAP.
Example (simplified) code fragment in C-occam for a compiler:
par {
  chan some_type preproc2lexer[], lexer2parser[], parser2emitter[]; // etc
  // all nested {} run in parallel but communicate through chan vars, maybe on 1 or N cpus

  preproc: {
    // do the usual file read
    // do lots of stuff
    preproc2lexer ! preprocResults[];  // ie write the result to the channel
  }

  lexer: {
    preproc2lexer ? lexer_in[];        // ie get the preproc data
    // do lexing
    lexer2parser ! lex_out[];          // ie put the result to the parser
  }

  parser: {
    lexer2parser ? parser_in[];        // ie read in the lex data
    // do parsing
    parser2emitter ! parser_out[];     // put the result to the emitter
  }

  // & so on
}
! is like an := assignment but with the possibility of being descheduled.
? is the same in reverse (Occam insisted on receiving into the lhs).
Channels are like soft wires which might pass over an I/O link.
Any of the nested {} could themselves be nested par{} or seq{} blocks.
Occam did not use any C-like syntax, no {}, no //, etc.; no wonder it was ~popular.
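For anyone who never met Occam: the closest mainstream relative of this model today is probably Go, whose goroutines and unbuffered channels give you the same rendezvous behaviour (a send blocks until the receiver arrives, and vice versa). Here is my own rough translation of the fragment above, offered purely as an illustration, not as anything Inmos or Sun shipped:

package main

import (
    "fmt"
    "strings"
)

// Each stage reads from one channel and writes to the next; unbuffered
// channels make every send a rendezvous, like the occam ! and ? above.
func preproc(src string, out chan<- string) {
    for _, line := range strings.Split(src, "\n") {
        out <- strings.TrimSpace(line) // preproc2lexer ! ...
    }
    close(out)
}

func lexer(in <-chan string, out chan<- string) {
    for line := range in { // preproc2lexer ? ...
        for _, tok := range strings.Fields(line) {
            out <- tok // lexer2parser ! ...
        }
    }
    close(out)
}

func parser(in <-chan string, done chan<- int) {
    n := 0
    for range in { // lexer2parser ? ...
        n++ // a real parser would build a tree here
    }
    done <- n
}

func main() {
    preproc2lexer := make(chan string) // unbuffered: rendezvous
    lexer2parser := make(chan string)
    done := make(chan int)

    // The par { ... } block: all three stages run concurrently.
    go preproc("int main ( ) { return 0 ; }", preproc2lexer)
    go lexer(preproc2lexer, lexer2parser)
    go parser(lexer2parser, done)

    fmt.Println("tokens parsed:", <-done)
}

The close() calls stand in for end-of-stream, and everything else maps more or less one-for-one onto the ! and ? above. The part Go does not give you is the Transputer trick of moving a stage onto a neighbouring CPU and letting the channel quietly cross a hardware link.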
Processes should be arranged such that big blocks of computation are isolated, with a minimum of message data between them.
IIRC processes had to be marked with a CPU id to run on, and the boot loader would put each process on the right CPU. This is very much like planning a PCB full of chips with limited pins. I don’t know much about any OS APIs of that era, or whether processes could move around at will.
After the demise, everything moved over to SHARCs, TIs, Alphas, etc., where some of the hardware also supported links but not much else. The missing parts, the scheduler and message passing, went back into a software API, but ran 10x slower than the hardware at the same clock. Anti-progress. KROC is the Occam that runs on PCs, but it’s still the Occam language! Handel-C and SystemC are two recent CSP-flavoured C languages, but they are really meant for software folks doing hardware.
Hidden behind seq is zippo. But par has a few opcodes to construct N more threads or processes to add to the list. := amounts to the usual mov. ! and ? are opcodes that mov but also synchronise. The penalties for these are respectively about 80, 1, 20, 20 cycles or so; compare that to Windows, where the equivalents cost hundreds of times more cycles.
AFAIK all the hyperthreaded CPUs still require a software kernel, which usually means the main OS, so there is no chance of a really fast process context switch on the order of <<100 cycles. In my work I am aiming for 0-5 cycles for most of these penalties!
regards
We’ll never reach a point where we have “enough” processing power to do some arbitrarily complex computation (such as simulations of physical processes that achieve useful approximations), but we will reach a point where it makes sense to do this sort of system-on-a-chip design for most markets. You’re already seeing it, and this trend will only continue.
If one could just tie together a bunch of separate cores more easily (this stuff seems mired in IP issues, but hopefully that will be addressed; one promising development is applying the open source model to IC development, which FPGAs are making possible), you could also improve time to market, which has traditionally been one of the most painful criticisms of the SoC philosophy.
Chip designers are going to have billion-plus transistor budgets; that’s just overkill for some markets. May as well put as much of the system as you can manage on a single chip. This will also create new markets.
…
Your household probably has about 200+ microprocessors/microcontrollers already, and the number increases every Christmas.
Be more specific, Sun.
You won’t see any GOOD SoC implementation without good batteries first, or without reconfigurable computing first.
“In the future, Sun will be rolling out a more radical CMT design, which will first appear in Sun’s blade platform in 2006, that can increase the throughput of today’s UltraSPARC IIIi systems by up to 15 times.”
Great, so in 2006 Sun will have a processor as fast as a modern-day Intel or AMD CPU.
I always like these bold statements. The ones I liked best were at a Microsoft convention showcasing what was then NT 5.0, where they stated that UNIX was dead, LOL. Sun still thinks they have an influential voice in the industry; they are sorely mistaken. It’s obvious they have no clue about what the industry is doing, or even where it is now. Sun is going to go the way of the dinosaur.
I think we have more than enough processing power for what 99% of the market could want right now. 1GHz CPUs are good enough for daily chores, and we get internet TV (better than I could have expected) on one of ours from Taiwan, thanks to cable, using only 120K of bandwidth. Full-size mobos will still be around for a while, but 2nd and 3rd PCs don’t need all that.
These one-chip PCs will sell a billion units once they get done, but probably in the Indian and Chinese markets, where even $100 may still be high. As for business use in the west, we will see.
The open source model doesn’t seem to be working too well for HW IP on FPGAs, and certainly not for ASICs. I.e. opencores.org is way out on the fringes, even if the intent is good. Freedomcpu.org has been trying to build a free 32-bit RISC CPU, but it is too little and far too late. Some good cores are available; at best, if you don’t have a core, you can learn from a free one, but in the end most people will either license a commercial design or spin their own.
Anyway, both Xilinx and Altera are trying to build the next generation of locked-in HW architectures, with very low cost CPU cores (the $1.50 MicroBlaze 32-bit RISC) and associated gnu tools. It’s Intel and Motorola all over again. The MicroBlaze probably compares with the last of the 040s in performance, so an old-style Mac could easily be built from a few dollars of FPGA fabric plus a ROM, even in one FPGA. The only other ICs needed are physical-level transceivers for VGA, USB, FireWire, etc.
Time to market may also be a strength of FPGAs, with no fab costs or 3-month fab delay, but as designs get bigger, approaching 1M equivalent gates, guess what happens: the EEs are back on the same old treadmill, and the FPGA design flow starts to look exactly like an ASIC design flow with a few more compromises. FPGAs also let one stay in perpetual R&D mode; as your design grows, jump to a bigger part, since they get bigger at a rate far in advance of our ability to fill them.
Still, interesting times ahead. Maybe reconfigurable computing (RC) will take off, but that will require a far more general understanding of parallel computing by the SW crowd, which is where Occam and CSP-like languages have a role, since Handel-C can be both a runtime C language and can be synthesized into HW, but only if you think like a HW person. Celoxica (the Handel-C vendor) makes great claims for their tool being able to compile ordinary C code off the net (i.e. a JPEG2000 core) into HW, but it is also far from affordable except to people with ASIC-sized budgets (5-6 figures), which doesn’t help when FPGAs may only cost a few dollars. RC is in desperate need of new low-cost tools to kick-start some demand for FPGA coprocessor boards, but so far all I see are the free HW tools.
Forgot to say that ASICs are dying: only about a quarter as many ASIC designs are being started as a few years ago, due to the multi-million-dollar mask and R&D costs. As fewer and larger ASICs get started, decisions are made only on the basis of what can sell in the very largest volumes, so interesting chips become scarcer.
In essence we are being forced into FPGAs, just as we are forced into buying plane seats instead of buying our own planes, which cost about the same as an ASIC design.
Well, EPIC is a great ISA, but its failure can be squarely blamed on Intel wanting to keep it an Intel-only innovation, thus giving third-party hardware vendors no incentive to support the chip with motherboards and expansion cards designed specifically with that architecture in mind.
They still have more UNIX® installations than anyone else in the world. If they can bring out chips that are even slightly competitive, they are not dead. Is anyone else sick of hearing how Linux rules the Earth? Does anyone even care what is under the hood? Linux has historically been weak there. The only reason Linux is so hyped is that IT managers found they could save money (and then get their faces posted in magazines as Open Source Saints) using x86 server equipment. However, Linux is becoming more costly in the enterprise; it is becoming just another UNIX. Of course it is becoming more bulletproof, with performance gains, now that it is being looked at as a UNIX replacement. (That is like replacing penguins with birds called lenguins, with black feathers on their backs, white feathers in front, that swim very well, have beaks, can’t fly, and can live in frigid temperatures.)
Everyone is trying so hard to put the nails in Sun’s coffin. Sun has done a great deal to embrace open source technologies and has given software up to the open source altar. Instead of praise, it’s usually, “I like Linux and I have x86 boxes, I can use Linux and it has worked well for me. It is new and Open Source, it is better than anything! Solaris is old and stinky and so un-cool, and .NET is way more advanced than stupid old Java, that was so 2000”.
There are benefits to having different versions of UNIX. (not just different flavors of GNU/Linux)
Java is not a dead technology, but Microshaft would like you to think so, even though C# is a dead ringer for Java. Only in an MS world could someone try to replace a technology that is well designed, incredibly pervasive and open with a whole new software framework (.NET, designed only for Winders) and have people eat it up with such enthusiasm and relish that they go out and learn C# right away!
I think at this point Sun should arrange an open/closed model for Java, where classes can be added or updated once approved by Sun. Someone has to steward the software, and why not Sun, since they did develop it? This way they could take constructive code and merge it, and they would also retain direction over the software’s evolution.
>Wasn’t all the functionality currently offered by PCs supposed to be spread among other devices such as cell phones and internet-enabled toasters, so that PCs as such would cease to exist?
This prediction may come to pass after all.
I see this happening for certain computers, but by putting everything on one chip you’re also forcing me to purchase “everything” from a single company, and that simply isn’t going to happen. No one company has the expertise to build the best CPU, the best GPU, the best sound hardware, the best networking hardware… that is why things are relatively modular inside a computer: choice.
Transmeta’s Efficeon has an integrated northbridge, and it is still small, has fewer transistors and consumes less power (compared to other CPUs in its class).
http://www.transmeta.com/efficeon/architecture.html
Yes, and Transmeta’s CPU is probably neck and neck with a 486 DX2-66; 1GHz means nothing if it takes 10 minutes to do an “ADD”.
Benchmark a Pentium III Tualatin 1.26GHz against a P4 1.4GHz or 1.5GHz. The P3 kicks the $h1t out of the P4.
I’ve used an entire PC (with a flash hard disk) that is about 3x2 inches. They are very good and powerful for size-constrained machines. AMD already makes them. Heck, even most ARM chips have some on-CPU RAM.
Hmm, stupid comments about computers from someone who works in the military. Are you sure you didn’t tell GWB that there were weapons of mass destruction in Iraq?
What does this comment have to do with anything? When have I ever said I was in the military? Does it matter? Does this somehow discredit my opinion and make yours more valid? LOL. One thing is obvious from your posts… New Zealand needs some gene pool drift!
Back to the point: take a look at DEC. They had an impressive installed base and, later, one of the best processor architectures ever. Where are they now? Stupid business decisions and an inability to adapt to changing markets left them ripe for buyout, and they faded into obscurity. You think your beloved Sun is immune to this? The signs are already there. IBM and HP are eating their lunch while Sun flails about trying to come up with something new. If you put more effort into thinking and less into forcing your opinion on others, you might learn something.
Intel’s chips are not 15 times faster than an UltraSPARC IIIi right now. Stop posting bullshit. SUNW is starting to make SPARC competitive again, like it or not.
Funny, wasn’t it just a very short while ago that the PowerPC chip Apple uses was also pronounced “DEAD” by Intel zealots before the G5 came out?
Yeah, I thought so.
You guys might be interested in this news…
ClearSpeed Announces CS301 Multi-Threaded Array Processor
http://www.supercomputingonline.com/article.php?sid=4753
“With conventional processor design, increasing performance has tended to come with real penalties in power consumption and heat dissipation, to the point where computing cannot keep up with the demands of today’s emerging applications and rapidly increasing volumes of data,” said Tom Beese, CEO of ClearSpeed Technology. “The CS301 is designed specifically to meet those needs with high performance, power efficiency and full programmability in C combined into a single chip. The CS301 is the first in a family of ClearSpeed microprocessors that we believe will challenge present day thinking by creating a world where scientists, bio-informaticians, engineers and content creators alike can have access to high performance computing anywhere, anytime.”
http://newsforge.com/newsforge/03/09/29/2234245.shtml?tid=3