Remember PA Semi? The company has just released, as promised, its first chipset. “They are full 64-bit PPC, support virtualisation, and would do AltiVec but that name is copyrighted by Freescale. Instead they do ‘VMA’. The three parts run at a max wattage of 25, 15 and 10W for the 2.0, 1.5 and 1.0GHz parts respectively, with typical wattage listed at 13, 8 and 6W. The individual cores are said to have a 7W max and 4W typical power consumption at 2.0GHz.” PA Semi was one of the prime reasons why Ars’s John ‘Hannibal’ Stokes doubted Apple’s reasoning for the switch to Intel.
These will have volume production in Q4, 2007, coinciding with Intel’s 45nm Penryn CPUs. Penryn should debut at closer to 3 GHz than 2 GHz, with 2.66 GHz being a conservative estimate. At 2.66 GHz, Penryn should score about 2800 SPECint. Now, as the article points out, these PA Semi chips should score comparably to PPC 970 in SPECint, normalized by clockspeed. That puts a 2 GHz 1682M at a stunning… 1200 SPECint. This performance will almost be matched by Intel’s ultra-low-voltage Core 2 coming out in Q2 which should get about 1150 SPECint at 1.08 GHz, with a TDP of 9W (1W typical).
So, can we leave the “switch” talk aside, since it’s clearly ridiculous? This CPU is not and was not intended for an Apple notebook. It’s got some seriously interesting features like on-die 10 gigabit Ethernet, integrated PCI-Express, and a mesh inter-processor communications mechanism. It’s a very interesting chip… if you’re building a router or a supercomputer.
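For the curious, the clock-normalized SPECint estimate above works out to a few lines. All figures are the thread’s projections, not measurements, and the ~600 SPECint-per-GHz baseline for the PPC 970 is an assumption implied by the “1200 at 2 GHz” claim:

```python
# Back-of-the-envelope SPECint2000 projection: assume the PA6T matches
# the PPC 970 clock-for-clock (the premise above). The 600-per-GHz
# baseline is inferred from the quoted figures, not a measured score.
PPC970_SPECINT_PER_GHZ = 600

def projected_specint(clock_ghz, specint_per_ghz=PPC970_SPECINT_PER_GHZ):
    """Scale a per-GHz SPECint baseline by clock speed."""
    return clock_ghz * specint_per_ghz

print(projected_specint(2.0))  # 2.0 GHz 1682M -> 1200.0
print(projected_specint(1.5))  # 1.5 GHz part  ->  900.0
```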
This CPU is not and was not intended for an Apple notebook.
I think PA Semi intended to sell them to Apple, but Apple didn’t intend to buy them. :)
These will have volume production in Q4, 2007, coinciding with Intel’s 45nm Penryn CPUs. Penryn should debut at closer to 3 GHz than 2 GHz, with 2.66 GHz being a conservative estimate. At 2.66 GHz, Penryn should score about 2800 SPECint. Now, as the article points out, these PA Semi chips should score comparably to PPC 970 in SPECint, normalized by clockspeed. That puts a 2 GHz 1682M at a stunning… 1200 SPECint. This performance will almost be matched by Intel’s ultra-low-voltage Core 2 coming out in Q2 which should get about 1150 SPECint at 1.08 GHz, with a TDP of 9W (1W typical).
SPEC 2000 benchmarks tend to show huge differences in performance between processors; SPEC 2006 and other benchmarks show these differences are nowhere near as big. Also, x86 chips have always been designed for integer performance first; nothing else is. The FP figures are the ones worth watching.
Anyway, it wasn’t designed as a competitor to a high-end desktop part; it would be best compared to laptop parts. That 25W figure includes a very potent northbridge. The best Intel parts currently use 35W, not including the northbridge.
So, can we leave the “switch” talk aside, since it’s clearly ridiculous? This CPU is not and was not intended for an Apple notebook.
Apple was planning to use this processor at one point but then made the switch, which was not for technical reasons. I think the real reasons became obvious some time ago: Boot Camp and a big SRAM deal for the iPod.
SPEC 2000 benchmarks tend to show huge differences in performance between processors; SPEC 2006 and other benchmarks show these differences are nowhere near as big.
Well, since PA Semi only gave projected SPEC 2000 figures, I worked with what I had. I’d be interested to see SPEC 2006 benchmarks for these processors. And other benchmarks can show even bigger differences. GCC compilation, for example, absolutely flies on x86. x86’s tend to have highly-optimized memory systems, since they have so few registers, and that gives them a big boost in code that deals with heavily-indirect data structures (graphs, trees).
Also, x86 chips have always been designed for integer performance first; nothing else is. The FP figures are the ones worth watching.
x86’s are designed for integer performance because that’s the most important thing in the desktop/workstation space. And that’s the crucial point here: nothing else is designed for the desktop/workstation space. And no, FP figures aren’t worth watching, because nobody cares about FP performance on a laptop. People only barely care about FP on the desktop, and will only care less about FP as more of that code is outsourced to the GPU.
Anyway, it wasn’t designed as a competitor to a high-end desktop part; it would be best compared to laptop parts.
Those are laptop parts! Even shipping laptop parts embarrass the PA-Semi chip (again, in laptop-relevant metrics). All this talk about bootcamp and SRAM is a cop-out. The simple fact is that the x86 world has the best laptop/desktop/workstation CPUs, period.
Edited 2007-02-06 02:07
It’s just too hard to compete with Intel in the larger parts of the mobile space (12″+ laptops). AMD, who has been focusing on the x86 space for a couple of decades, is only now starting to come out with parts that make sense next to Intel’s mobile offerings.
But even they don’t go toe-to-toe with Intel in price/performance yet, and Intel is doing a good job at staying a year ahead in process technology. If AMD can’t quite get there with IBM and Chartered backing them up, then PA isn’t going to make headway in the laptop market.
They know this. That’s why I’m inclined to agree with Rayiner. There’s a high-volume market out there that will soon dwarf the traditional PC market. It comprises increasingly capable ultra-mobile gadgets as well as larger embedded applications in the datacenter.
Imagine the datacenter in 10-15 years, composed of massive general-purpose compute nodes (virtualized mainframes running Linux) and dedicated I/O paths served by racks of re-provisionable controllers based on low-power integrated systems (probably running Linux). PA Semi probably wants to become the Cisco of enterprise I/O offloading.
While I agree with your statements about the x86 line, I have to say that I feel Apple made a mistake in this ‘switch’ process.
I had an intense crash-course in the MacOSX world at my last job and I found them to be rather weak machines. Whether this was because they were older machines and I’ve been so spoiled by 2Ghz goodness, or if they really are as slow as they felt – I don’t know, but they did have a different personality.
I’ve studied all the aspects (the Mach underpinnings, the tragedy of ‘Classic’, the bastardized unixness of it all, etc) – and I came to appreciate the amount of dedicated development time that went into making the total package. I personally don’t really like the environment, but that’s just my taste and I understand why some people prefer OSX over XP (and far too few consider Linux).
But for Mac users, I think it would have been best to maintain PPC chipsets – as there are many new PPC variations coming into the market now. I’ve seen the poor performance of ‘Rosetta’ and feel pity for those that got stuck with new PPC laptops only to find they’ve been orphaned. I know that it’s great that new Mac laptops can run Windows, but to me – now it’s just an over-priced PC laptop. If it had a unique CPU along with a unique OS – then I think it would be appropriate, but I’ve got a gut feeling that _someone_ will publicly crack OSX on x86 (correct me if this has already happened, as it may have) – so what’s the value in an x86 Mac? Show me a laptop based on the STI Cell chip and I’d take more notice (whether this is feasible or not, it’s probably fanciful musing).
While Intel will probably be providing Apple with their top-line Core 2 chips, there’s also AMD and others to consider (let’s just say competitive options are better). I love my 2GHz Turion 64 (and I’m looking to upgrade the CPU to an X2). But while Mac users can finally fight back against the “But it doesn’t run Windows…” argument, maybe it was better that they didn’t. At least their machine was truly ‘different’, for better or worse.
Edited 2007-02-06 02:50
Your post is very beautifully written, but devoid of substance.
Classic wasn’t a tragedy in any sense. When its legacy cruft became too much to bear, Apple moved to a completely new OS (contrast that with Vista) which still managed to bridge backwards compatibility.
The “new variations on PPC” you mention are largely Cell processors which no one has tried to pass off as being suitable for general-purpose CPUs, or more powerful PPCs which still consume too much power to be laptop-feasible. Laptop sales outpace desktop sales, and vendors ignore this at their peril.
I’m disappointed that IBM refused to invest their own money in Apple’s corporate future, too, but realistically PPC entered Apple when Intel had no comparable offerings. Intel caught up, fast, but IBM saw no incentive to compete with Intel and even sabotaged Intel emulation in the G5s without telling Apple.
As far as the ‘can’t tell them apart’ argument is concerned, Macs have been running standard hardware for close to a decade. ADB, NuBus, RS422 serial ports and nonstandard analog video all died back in the Clinton administration. The silent argument from Apple on hardware is the same: OS and hardware run smoothest when designed by the same people. It’s the model that used to rule the industry, and until IBM and Microsoft it was unquestioned. Thanks to cloners and Windows, compatibility and driver BSODs are a real issue. Linux has to work twice as hard to resolve this problem because of the amount of blackboxing. Yeah, I’ll pay more for a system that doesn’t fight or guess to work.
This is about performance, not gut feelings.
There is a real logical difficulty with the argument, and it’s why there is such nostalgia for PPC. Fact is, you cannot at the same time describe a supplier as “running standard hardware for close to a decade” AND as designing its own hardware. The two are incompatible.
What Apple did with the move to Intel was stop designing hardware, and instead do wholesale what it had been doing partially for some time: packaging off-the-shelf parts. The result was a real loss of difference, and this is something some Apple customers found very hard to accept. But it was right. The market has voted on the question of unique hardware tied to an OS, and the verdict has been: too expensive for too little, if any, gain. It turned into difference for its own sake. There really was nothing to be gained from designing your own graphics cards or desktop peripheral interfaces. Or even, latterly, your own main board. You just end up being different for its own sake, at vast expense.
You could see the Intel decision was right as soon as the new towers came out. Where before the interior had been filled with humongous loud cooling, now there was space for extra drives.
Now, do OS and hardware run smoothest when designed by the same people? Don’t know of any evidence to that effect on modern hardware, because no-one is doing it any more. On the possibly related point, is there any evidence that OSX runs more smoothly than XP or major flavors of Linux? Don’t know of any.
And even if there were, it’s not clear what it would show. How would you separate out the two possible contributors, OS quality and quantity of hardware supported, to assign blame or praise properly?
The one thing such evidence could not prove however is that designing the hardware and the OS is better. Because Apple is not doing that, and hasn’t for years.
On the possibly related point, is there any evidence that OSX runs more smoothly than XP or major flavors of Linux? Don’t know of any.
As somebody who has used Linux for five or six years now, I can definitely say that the hardware/software integration is a major plus for OS X that linux does not have. The benefit is not so much that the hardware can be designed for the software, but rather by shipping the hardware and the software as a unit, the software can be customized well to the hardware. When the same company that designs the OS specifies every chip that goes into the machine, they have a tremendous amount of flexibility in making sure the two sets of decisions are compatible.
Poor support of the hardware in any given machine is the single biggest thing that makes Linux complicated to use. It is also a major thing making Windows complicated to use, but at least Microsoft has the benefit of being the 800 lb gorilla that everyone designs their hardware for. Consider something simple like putting a laptop to sleep. It’s a complete crapshoot whether it’ll work on any given machine with Linux, and not a whole lot more reliable in Windows. In contrast, suspend always works on Apple’s laptops, for the simple reason that Apple has enormously fewer combinations of hardware and software to test and support.
Apple could easily do the same with Linux, btw. There is no reason OS X is any more suitable for such a purpose than Linux, and Linux certainly has more overall hardware support. The difference isn’t the OS: it’s the fact that the software is developed with the hardware in mind, and the whole thing is shipped, sold, and supported as a unit.
rayiner stated what I was trying to get across much more coherently. If Microsoft were allowed to make computers, they’d be a hell of a lot more stable.
Show me a man who thinks off-the-shelf should be the rule for boxes and I’ll show you a man who’s never written a driver.
Fact is, you cannot at the same time describe a supplier as “running standard hardware for close to a decade” AND as designing its own hardware. The two are incompatible.
There’s a far cry between Apple restricting the specs for their computers and Microsoft hoping an unknown Taiwanese mobo plant builds an integrated board that doesn’t fry itself or defy a hardware standard. Soyo makes tons of mobos with inexpensive, commodity integrated ethernet chips which are notorious for overheating and self-destructing.
If either of us had a nickel for the number of mobos that duplicate video, audio and/or ethernet between the onboard chipset and their replacements, we’d be rich.
There is a difference, and it isn’t semantic.
Sorry, I just think that for the sake of Apple developers and customers, they should have held on to the PPC line for a little longer. I’ll say again that I’m not a big fan of the PPC, but the ultra-low power consumption of the newer models is quite impressive (and that’s a major laptop benefit). The reliability of the PPC chip makers has always been a problem, and that was probably the deciding factor for the ‘Switch’, but it was a heck of a sacrifice.
Performance is important, but compatibility’s more useful. The Switch finally brings new Apple customers to the same table (and plugs) as PCs (more internally than externally). Now they can Really run Windows, good for them – so why didn’t they just buy an HP/Gateway/Lenovo/etc? Apple controls the hardware – yes. But PCs have evolved to handle this diversity (maybe not all hobby OSes run on all laptops, okay). Windows is the MacOS of x86 (everyone insert your favorite MS joke here), but it’s got home-field advantage in peripherals and software. Apple has made all sorts of devices (and they totally rock, don’t get me wrong), but now they’re being abandoned. I wish I had a BeBox ‘Geekport’ connector for Windows – it was a kick-ass concept and device, but the company controlled its demise. I fear the same fate for the loyal fans of Apple. (Words from a BeOS refugee – we died on x86 too…)
x86’s are designed for integer performance because that’s the most important thing in the desktop/workstation space.
If you are doing video, image processing, 3D rendering, audio processing, or even just playing games, FP is more important.
And that’s the crucial point here: nothing else is designed for the desktop/workstation space. And no, FP figures aren’t worth watching, because nobody cares about FP performance on a laptop.
Today’s desktop and workstations are laptops, the concerns are identical.
People only barely care about FP on the desktop, and will only care less about FP as more of that code is outsourced to the GPU.
Given that things like SSE haven’t been taken up as much as Intel would like, that’s somewhat uncertain.
If you are doing video, image processing, 3D rendering, audio processing, or even just playing games, FP is more important.
Very few people do any of these things on a laptop. Moreover, these characterizations aren’t even accurate. Unless you’re working with HDR images, your image processing is going to be dealing with integers, not floating-point values. While 3D rendering has a substantial floating-point component, the integer component is probably still larger. 3D rendering code is enormously data-structure intensive, and scanline rasterizers are still integer-based. And in gaming, almost all the heavy FP lifting is done by the graphics card. That was why the “Dothan” Pentium-M was so popular with gamers, despite its poor floating-point performance.
All of the things you mentioned are more FP-intensive than your average desktop program, but it’s also important to keep in mind that program logic is fundamentally integer code, and aside from a few really specific computations (eg: encoding, signal processing), program logic will still make up a large fraction of your program. To use a concrete example, a 3D renderer will probably spend substantially more time loading, sorting, culling, rasterizing, and texturing triangles than it will transforming them. This implies that it’s important to have a very good balance between integer performance and floating-point performance. It’s fairly well-accepted that the G5 did not have an adequate amount of integer performance to balance its floating-point performance. Based on this CPU’s projected SPEC scores (~1200/2000 at 2 GHz), this chip looks even more unbalanced than the G5. Under real-world code, that is going to be a problem.
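To illustrate the claim that scanline rasterization is integer work, here’s a minimal Bresenham line rasterizer — the textbook algorithm, not code from any renderer discussed here — which uses nothing but integer adds, compares, and a doubling:

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize a line into pixel coordinates using only integer math."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error term; no floats anywhere
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points
```

The inner loop is branches and integer adds — exactly the kind of work where a chip with weak integer performance falls behind, no matter how good its FP units are.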
Today’s desktop and workstations are laptops, the concerns are identical.
Uh, no. Today’s laptops are laptops. People doing heavy FP crunching are still by far working on desktop machines.
Given that things like SSE haven’t been taken up as much as Intel would like, that’s somewhat uncertain.
Because nobody wants to do single-precision FP on the CPU when the GPU can do it so much faster! If there really was a demand for heavy-duty FP on the CPU, people would’ve complained a lot more vocally about the shortcomings of the P-M through Core Duo chips. Poor FP performance just about killed non-Intel chips in the K5 and K6 eras, but since graphics cards took most of the FP grunt off the CPU, people really couldn’t care less.
Edited 2007-02-07 20:20
> ultra-low-voltage Core 2 … with a TDP of 9W (1W typical).
It is useless without the chipset.
Take a look at Intel Tolapai (aimed at low power): a single 32-bit core with a 13-22W TDP @ 600-1200MHz.
If they had been in production a year ago, maybe then they MIGHT have gotten a look.
Plus, like mentioned, this is one single release.
Jumping on the Intel bandwagon is a “safer” and “more guaranteed” long term viable solution.
Edited 2007-02-05 23:29
I don’t think Apple is going to change their decision of switching to Intel just because a company that may go bankrupt tomorrow starts doing PPC clones. PA Semi just proves that the main reason to switch to Intel was not power consumption; power consumption was just an excuse for the media & zealots.
It proves nothing of the sort. PA-Semi’s performance-per-watt isn’t very good at all, at least in terms of laptop-relevant metrics. The 2 GHz 1682M gets 48 SPECint/watt. The 2.33 GHz Core 2 gets 70 SPECint/watt. The 1.08 GHz ULV Core 2 will get a very nice 128 SPECint/watt.
Now, I can’t claim to divine Apple’s real reasoning, but performance/watt certainly seems like a reasonable case from an objective standpoint.
It proves nothing of the sort. PA-Semi’s performance-per-watt isn’t very good at all, at least in terms of laptop-relevant metrics. The 2 GHz 1682M gets 48 SPECint/watt. The 2.33 GHz Core 2 gets 70 SPECint/watt. The 1.08 GHz ULV Core 2 will get a very nice 128 SPECint/watt.
This comparison is horribly flawed. The most prominent error is that both the TDP number you quoted and the PA Semi numbers are with both cores running, while SPECint only uses one core. This alone would make the comparison moot, but there are still other problems:
First of all, Intel TDPs are much closer to IBM et al.’s typical power numbers, whilst the 25W figure from the PA Semi slides is under a thermal virus; the typical power quoted was 17W.
Now, the 17W figure also includes power consumed by the I/O stuff, so please put a part of the northbridge+southbridge power consumption into your C2D figure.
Last but not least, Intel’s ULV parts are cherry-picked from the best bin splits of a certain process node, whilst the 1682M hits those power levels even under significant process variability. Vdd can be adjusted to trade between static and dynamic power consumption over the production run to increase yield while keeping the processor in its power envelope.
Finally, the C2D is being produced on Intel’s most bleeding-edge process, whilst the 1682M will be fabbed elsewhere; Intel’s process is specifically tailored for high-performance MPUs.
The 1682M shows an exceptional performance/watt ratio and will be a significant threat to Freescale offerings in the high-end embedded space and maybe HPC. What makes it unsuitable for desktop/laptop space is:
- (almost) nobody is building desktops/laptops with PowerPC chips, so there’s not much of a market for them in that space
- it has boatloads of extra features you don’t want there; BTW, that’s one of the reasons why Apple didn’t want to fiddle with Freescale’s 8641D
This comparison is horribly flawed. The most prominent error is that both the TDP number you quoted and the PA Semi numbers are with both cores running, while SPECint only uses one core. This alone would make the comparison moot, but there are still other problems:
The comparison is valid because it establishes a lower bound on the SPECint performance/watt of each CPU. In general, both CPUs will achieve higher ratios, because SPECint will only cause the CPU to reach some fraction of its maximum TDP. This is both because it exercises only one core, and because it won’t use the FP hardware at all. For a ballpark estimation like this, we can assume the fraction to be similar on each processor, which would cause it to factor out, leaving the relative performance/watt similar regardless of its value.
First of all, Intel TDPs are much closer to IBM et al.’s typical power numbers, whilst the 25W figure from the PA Semi slides is under a thermal virus; the typical power quoted was 17W.
There is a nice thread on RWT this week that talks about how Intel measures their TDP under a power virus, and the measured power usage under such code is actually less than the stated TDP.
Now, the 17W figure also includes power consumed by the I/O stuff, so please put a part of the northbridge+southbridge power consumption into your C2D figure.
That’s a fair point. Since we have actual per-core figures for the PA6T, let’s use that instead. So the 2.0 GHz 1682M would achieve 1200 SPECint at 14W, giving it a ratio of 86. The 1.08 GHz Core 2 ULV would achieve 1150 SPECint at 9W, giving it a ratio of 127. Better, but still not competitive with what Intel has now, much less what they’ll have in Q4 when this thing is actually shipping in volume.
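For what it’s worth, the per-watt arithmetic in this sub-thread is easy to reproduce. All SPECint scores are the projections quoted above, except the score for the 2.33 GHz Core 2 (~2450), which is an assumption back-derived here from the quoted 70/W ratio:

```python
# Performance-per-watt figures from this thread. All SPECint2000 scores
# are projections quoted in the discussion; the 2450 for the 2.33 GHz
# Core 2 is assumed to match the 70 SPECint/watt figure quoted earlier.
def specint_per_watt(specint, watts):
    return round(specint / watts)

print(specint_per_watt(1200, 25))  # 2.0 GHz 1682M, whole chip (25W max) -> 48
print(specint_per_watt(2450, 35))  # 2.33 GHz Core 2 (35W TDP)           -> 70
print(specint_per_watt(1150, 9))   # 1.08 GHz ULV Core 2 (9W TDP)        -> 128
print(specint_per_watt(1200, 14))  # 1682M, two 7W cores only            -> 86
```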
Last but not least, Intel’s ULV parts are cherry-picked from the best bin splits of a certain process node, whilst the 1682M hits those power levels even under significant process variability. Vdd can be adjusted to trade between static and dynamic power consumption over the production run to increase yield while keeping the processor in its power envelope.
While that’s a fair point, Intel also has vastly more production resources to cherry-pick from than whomever PA-Semi outsources their fabrication to. If we’re talking about right now, I’m sure Intel has a lot more ULV Core 2’s than PA-Semi has limited-production 1682M’s to sell. If we’re talking about Q4, when the 1682M will be in volume production, well, then Intel will be shipping 45nm CPUs, and won’t have to cherry-pick.
Finally, the C2D is being produced on Intel’s most bleeding-edge process, whilst the 1682M will be fabbed elsewhere; Intel’s process is specifically tailored for high-performance MPUs.
Yep. But high-performance process capability is a major competitive advantage for Intel. Apple doesn’t care if the design would be competitive if fabbed on Intel’s process, because that’s not a realistic design proposition.
The 1682M’s performance/watt looks good if you actually leverage its extra features (fast FP, integrated I/O). However, if you just look at integer performance/watt, it’s not that great compared to Intel, and the absolute performance even at 2 GHz is really low. Now, obviously there are a huge number of customers for which this chip’s performance characteristics are absolutely right. No doubt PA Semi intends to sell to those people. However, pointing to this chip as a refutation of the reasoning behind the switch is silly. It’s a validation of that reasoning: it’s yet another embedded chip that’s not designed for Apple’s target market. It’s not easy to design high-performance CPUs for the desktop market, and if you’re not even trying, but instead repurposing chips designed for different markets, you’re never going to be competitive!
diegocg, I wasn’t suggesting Apple would switch.
How’d you get that from my comments? :-S
Does it run Linux? (I suppose it does, it’d be stupid to release a chip without OS support, and I don’t see apple/sun/microsoft releasing a special version for this chip…)
Edit: Yes, it does: http://kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=…, http://lwn.net/Articles/219789/, http://lkml.org/lkml/2007/1/29/16
Apparently it could also run AIX, although I guess Linux is going to be their main OS… once again, FOSS comes to the rescue and helps enterprises innovate new products in the computing industry.
Edited 2007-02-06 00:54
Does it run Linux?
Perhaps I just don’t understand the technical details, but shouldn’t the question be “does Linux run it?”
(And I’m glad to hear that it does!)
Depends on how you look at it – it’s the CPU that executes the instructions, so it’s the CPU that “runs” Linux. On the other hand, Linux must support it, and when it does, it “runs on” the PA Semi CPUs.
very Zen.
I must haiku…
symbiotic bits
some hard, some soft, both running
together in time
Of course it runs Linux! In the future, everything will run Linux, and many devices will only run Linux. You can hardly release a new CPU architecture today without Linux support, mainly because that’s the easiest way for the hardware engineers to test and characterize the development parts (besides relying on Cadence simulations). Unless you’re already an OS vendor, you don’t really have any better options.
I’m obligated to be glad that it also runs AIX, but nobody really runs AIX unless they have to (i.e. they own a System P and don’t want to pretend it’s a really big Mac).
I thought AltiVec was an Apple term, not a Motorola/Freescale one.
Motorola called it AltiVec, Apple called it the “Velocity Engine”.
Motorola called it AltiVec, Apple called it the “Velocity Engine”.
Doh! Of course.
Moto/Freescale = Altivec
PA Semi = VMA
IBM = VMX
Apple used “Velocity Engine” not only because of trademarks but also because it’s vendor neutral – which was necessary when using both IBM and Moto/Freescale chips in different products.
The x86 camp at least has this in line due to their cross-licensing agreement. But it really is kind of weird that the PPC consortium can’t get its members to agree on one name – or at least provide an official vendor-neutral term free for all its members to use.
The x86 camp at least has this in line due to their cross-licensing agreement.
Not really. Whether it’s because they’ve not cross-licensed the names or not (and presumably they haven’t), several features of x86 chips differ between AMD and Intel – such as 3DNow! and MMX, the No eXecute bit (which has a different name on Intel x86), and the two instruction sets that put the new x86 processors into “fully-virtualizable” mode.
It’s really silly that there are now four different names for the same instructions. If competitors like Intel and AMD can agree that SSE is called SSE, then you’d think that the PowerPC alliance could agree on one name.
That’s because Altivec is a trademark ™ of Freescale.
This being pitched to be used in the PegasosPPC Open Desktop – http://www.pegasosppc.com/odw.php
It would provide a grunty processor, and coupled with a decent video card, it would be a good desktop computer; coupled with OpenSolaris or FreeBSD it would be quite a nice setup.
“It would provide a grunty processor, and coupled with a decent video card, it would be a good desktop computer; coupled with OpenSolaris or FreeBSD it would be quite a nice setup.”
…if it weren’t for the very fact that its (at the very least initial) low volume would make it extremely expensive compared to any x86-based chip.
Well, it can’t be any worse in terms of price/performance than current PowerPC implementations, and considering that IBM will most likely fab it, we might see some vendors jump on board – let’s remember that Sun never closed the door on the idea of selling PowerPC hardware.
While I mostly agree with what others say here, sometimes some views, like Rayiner’s, seem a bit narrow-minded. I know you were talking about the Apple switch, but all that discussion just makes it seem like there is only a desktop/notebook market.
We designed our own CCD astronomical camera, based upon a Ubicom chip. Do you know that that company sells probably hundreds of thousands of those “network CPUs” to a specific niche market and does well?
We also design kiosks. We use mini-ITX boards. Why not use a VIA C7? It either does the job, or it does not.
The big Lexmark printers we use here do sport a PPC CPU. How many printers are out there? Would you put the latest and greatest Intel Core Duo inside? Surely not, if you don’t want your printer to render for you in the meantime :)
I think that, with the UMPC market coming, and with today’s rising number of devices being network-aware (printers, routers…), there is enough of a market for PA to do well, IMO. Or do you think their management is so stupid that they only had the Apple market in mind when they started, and will now go bankrupt?
Also, statements like “Intel…, coming out in Q2…” sound kind of funny. If you build and plan your solution, you have to have something in your hand now, or even better, pretty much ahead of time :)
Cheers,
-pekr-
Yes, I’m aware there is something outside the desktop/laptop market. We have some applications at work (doing real-time signal processing in radios) where a chip like this would be ideal (low-power, fast FP, vector instructions).
But, I was talking specifically in the context of Apple. For Apple, there is nothing outside the desktop/laptop market.
As for Intel’s scheduled product releases, I should point out that this chip won’t be shipping in volume until Q4. Again, if you need a couple for a specific application, that doesn’t matter to you, but if you’re Apple, a chip doesn’t exist unless you can ship a million of them per quarter.
The question is:
Now that Apple has gone x86, who will build computers using these chips? I mean… who would build enough computers to make this company survive?
Simple: this chip is aimed at high-end networking equipment and the like. For example, the system includes encryption engines which allow it to encrypt Gigabit Ethernet in 3DES at wire speed. It also has hardware support for IPSEC and TCP offloading.
Sounds perfect to me to create a dedicated firewall system like a Cisco PIX. After all nearly everything you need is in the SoC (system on chip).
* 6 Ethernet interfaces (2x10Gb and 4x1Gb)
* Encryption engines to allow wirespeed encryption
* 8 PCI-Express controllers
* CF and IDE controller
* UART
* Dual memory controllers
With all that in the one piece of Silicon there’s very little extra needed in way of additional chipsets/silicon etc to get a fully working system.
An interesting thing about the virtualisation support is that you can partition the PCI-Express controllers between the domains. The documentation says that up to 8 guests are allowed, in which case you would be able to assign a specific PCI-E controller to each one; a nice compartmentalisation feature if you ask me.
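As an illustration of that compartmentalisation, a static assignment of the 8 PCI-Express controllers across guest domains might look like this. The function and its names are invented for the example; only the 8-controller/8-guest limits come from the documentation discussed above:

```python
# Hypothetical sketch: statically partitioning the SoC's 8 PCI-Express
# controllers among up to 8 guest domains, round-robin. The API is made
# up for illustration and is not from PA Semi's documentation.
def partition_pcie(num_guests, num_controllers=8):
    if not 1 <= num_guests <= 8:
        raise ValueError("the chip supports 1 to 8 guest domains")
    plan = {guest: [] for guest in range(num_guests)}
    for ctrl in range(num_controllers):
        plan[ctrl % num_guests].append(ctrl)
    return plan

print(partition_pcie(8))  # one controller per guest
```

With 8 guests, each domain gets exactly one dedicated controller; with fewer, the controllers are spread evenly.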
The cores themselves have a 7W max power consumption, so all the other stuff on the die is eating up 11W of power (2 cores, after all).
Edited 2007-02-06 10:25
No, the cores use 14W (each core uses 7W).
>> No, the cores use 14W (each core uses 7W).<<
Which is what I said in my post: 7W per core. So given that it has a 25W total power usage @ 2GHz, that’s 11W for all the non-core circuitry on the SoC.
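The arithmetic both posts agree on, spelled out (figures from the PA Semi numbers quoted earlier in the thread):

```python
# Power budget implied by the quoted figures: 25W max for the whole SoC
# at 2 GHz, two cores at 7W max each, and the remainder for the on-die
# I/O, memory controllers, and other non-core logic.
total_soc_w = 25
core_w, num_cores = 7, 2
uncore_w = total_soc_w - num_cores * core_w
print(uncore_w)  # -> 11
```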
Apple has an embedded iPod and who knows what else in the future; that alone is a perfect fit for a cheap, low-power PPC SoC and/or a PPC-based kiloCore FPGA, for instance.
Then there are wireless 11g+, 11n and 11s (the new mesh networking spec) devices that are also NAS boxes, and which will be, or at least could be, self-contained network computers running the likes of REBOL/Core/View and other 3rd-party apps remotely.
There’s also the option to include eSATA and DVB-T/C/S, not forgetting the new DVB-H(2), inside these devices, which your average end user could then buy as their home LAN expands to accommodate the coming IPTV revolution, if you believe the likes of the latest digitalmediapublishing.co.uk IPTV edition.
http://www.digitalmediapublishing.co.uk/media/files/iptv-news-analy…
http://www.digitalmediapublishing.co.uk/analyst-publications.htm
http://www.ruckuswireless.com/products/mediaflex_router/
“The Ruckus MediaFlex NG is the first-of-its-kind wireless multimedia service platform that reliably distributes multimedia content over standard 802.11 Wi-Fi to every corner of the home.
The Ruckus MediaFlex NG combines innovative, patent-pending smart antenna and traffic management technologies to break down the barriers that have prevented a single Wi-Fi network from simultaneously supporting voice, video and data in the home.”
Plenty of potential future profit and innovation there, if you just look for it and start making the devices people want. Where are the external SD/HD-grade AVC/H.264 video encoders for the coming end-user content creation, for instance?
Yuan makes cheap hardware MPEG-4 ASP (aka DivX/Xvid) encoder devices, for instance, but no AVC encoder devices yet, even though more-than-capable multicore FPGAs (like kiloCore etc.) and AVC/H.264 IP (like the CoreAVC encoder/decoder codec) exist out there that could be licensed and ported to these things. When will we see these appear for purchase, though? That’s the question.
Edited 2007-02-06 18:13
What about a little mini-ITX computer running Haiku/BeOS… with this processor…
Mmh.. Gahh.
The only reason I read through the comments was in the hope that somebody knew.
All I found was arguments about x86 vs. PPC and apple’s switch.
Boring.
Apple switched to Intel so they could sell cheaper computers in larger quantities and ride the iPod wave. It’s working beautifully. Future Shop, where I work now, sells 25% Apples.
The P.A. Semi chip is clearly aimed at the embedded market, where most PPC sales lie. Being an SoC with such low power consumption and high performance makes it very attractive in this space.
If you still think it’s for a PC, look for the VGA port on the board.
As far as x86 vs. PPC, the fastest computer on earth is a PPC. So there.
If anyone catches the price somewhere please reply.