So, we have Intel and AMD. They're doing pretty well in laptops, servers, and of course desktops, but when it comes to mobile devices, they've so far been unable to adapt the x86 architecture to the stricter requirements those devices impose. ARM, on the other hand, pretty much owns that market at this point. And you know what? It's time for Intel and AMD to get worried – really worried. ARM has just announced its Cortex-A15 MPCore chips, which will reach 2.5GHz in quad-core configurations.
As most of us will know, x86 pretty much owns the laptop and desktop markets, and is doing pretty well in the server business as well. When it comes to the fast-growing mobile market, however, x86 is out of place, and can't compete in any significant way with ARM's chips.
ARM, however, has its sights on more than just the mobile market – they want a piece of the server pie with their low-cost, power-efficient architecture. Today, ARM announced its Cortex-A15 MPCore chips, which will come in single, dual, and quad-core configurations at speeds of up to 2.5GHz. Virtualisation is built right into the server variants of these chips, and they can support up to one terabyte of memory.
“The launch of the Cortex-A15 MPCore processor marks the beginning of an entirely new era for the ARM Partnership. It brings together more than 20 years of ARM expertise in low-power design with a host of new and very aggressive high-performance technologies,” said Mike Inglis, EVP and GM, Processor Division, ARM. “The Cortex-A15 MPCore processor will become the next major step along the industry’s energy efficient computing roadmap and open up a wide range of new application possibilities for our Partners.”
I just do not understand why manufacturers like Dell, HP, and the like do not design and ship both laptops and desktops with multiple ARM processors, instead of shipping the same old boring standard boxes everybody else ships as well. Can you imagine a laptop with, I don’t know, three or four dual-core ARM chips, running at 1.5GHz each, with the ability to shut down each of these processors if they’re not needed? What about a workstation with eight of these quad-core babies, powering on when needed?
Probably a pipe-dream for now, but ARM’s aggressive new line-up of processors indicates the company is dead-serious about conquering markets it used to have no place in. They’re aiming at servers now, but I wouldn’t be surprised to see them work their way towards desktops and laptops (other than netbooks) as well. It just makes sense.
I am expecting it with a low-power quad-core.
I’d definitely love to play with a quad-core ARM processor if there was a good, consumer-oriented mobo available. There’s just so much potential in ARM and it seems they’re gaining more and more momentum.
Sure, x86 architecture trumps ARM (for now!) in terms of performance, but ARM is really power-efficient, needs less cooling (less fan noise!), and there seems to be a lot more innovation going on there, so it’s not far-fetched to assume that in a few years’ time there will be desktop computers powered by ARM for sale in local shops.
I for one am excited!
It would be great to have a mobo with four of these 2.5GHz quad-core ARM chips, in SMP fashion.
Low power, great performance.
I’d love a quad socket quad ARM based desktop, preferably with an AMD GPU!
We really need to get real consumer products made with ARM and Linux or potentially Haiku once it gets hammered out. Laptops, desktops, tablets…
Ooo! An 18″ dual quad-core ARM laptop with an SSD and a ruggedized all-aluminum case to cool it passively!
We have the software to back it up already, we don’t need MS to bless it, just build it with consumer input like the Open Pandora project! Everyone I show the OP to has a homicidal desire to own one!
I want to see these things in a traditional desktop or laptop machine. Or netbook. If I had the money, and the features are nice… I’d buy it. And install Debian on it.
Why not ReactOS???
http://www.reactos.org/en/newsletter_76.html#sec2
Personally I prefer NetBSD and cross my fingers for an opensource 3D stack.
ReactOS still relies on a number of MS DLLs in the end to run a good bit of software. It also has the issue that most Windows software is 32-bit x86 only, with some also providing 64-bit x86 variants. ARM doesn’t sport x86 compatibility, and never will. Now, if you want to put an emulator there too – that’s one thing, but that’s not ARM.
I didn’t know ReactOS even ran on ARM, and I don’t know why it would, since practically all Windows software won’t run on it, as TemporalBeing pointed out. Kind of defeats the whole point of running it in the first place, doesn’t it? Not to mention, the last time I tried ReactOS on x86 hardware or a virtual machine (one of the latest versions of the OS, actually), it still had a long way to go. I just gave Debian as an example, but obviously there are other choices out there. The ones I was thinking of were the BSDs.
“I just do not understand why manufacturers like Dell, HP, and the like do not design and ship both laptops and desktops with multiple ARM processors”
For the same reason they don’t ship linux as the preinstalled OS on their machines….They don’t have the b***s to take risks
Every revolution is risky and can fail, so why not mass-produce cheap computers in China and sell them for thousands of dollars in the West? After all, the new capitalism is all about making money, not about making new technologies for the benefit of mankind.
What was the old capitalism about?
Mercantilism, I think.
http://en.wikipedia.org/wiki/Mercantilism
Mercantilism was still about making money.
Maybe the original poster in this subthread does not understand what the “capital” in capitalism stands for?
…WOW!!!
This to me just raises the question: what will Microsoft do? Sure, they have had a good run on the desktop and even on servers, but that was all during the x86 days. True, MS does have Mobile/CE, but will they be able to scale that up to a server platform? All I know is that MS will need to come up with a solid plan in order to compete in the very near future.
Considering there have been versions of Windows NT running on Alpha and even PowerPC, and Windows XP and Windows Server 2003 (and I think even Server 2008 R1) running on Itanium… I don’t think it is a huge problem for them to port Windows to another platform if it becomes popular.
Great, Windows with no software. The thing the Linux software world has that the Windows software world does not is the source. Any single distribution can port all the software it wants to. In the Windows world, each company will do a cost analysis to see if a port is worth doing. This causes a chicken-and-egg problem: why port to a platform (ARM/Windows) with no users, and why buy into a platform with no software? Etc etc etc. In the Linux world, the repository is just recompiled, fixing anything that comes up on the new compile target.

Windows is wedded to x86 until death do they part, unless the .NET hype is to be believed – and let’s not beat about the bush, .NET is just Java all over again. Best case for them, all Windows apps are byte-coded, but the platform they will be competing against will have native apps…
.NET hype is very real for ASP.NET applications and websites. Pretty much every job around my area in England is ASP.NET.
These chips seem to be being pushed for servers, which is where most ASP.NET applications run. If the cost of the machines plus the savings on power (which is very important for the organisation I work for, since they have a big “green drive”) works out, I can see organisations moving to this.
Yes, for the server, it’s different. I was talking about the desktop. On the server, you can throw hardware at the problem – there is one server. On the client, you can’t throw hardware at the problem, as there are N clients.
[The following is a bit of a rant, and should probably be taken with a pinch of salt]
But even on the server, native is still better, as it means cheaper servers – but then it comes down to: is the server cheaper enough to offset the extra development time/cost? I think byte code can make sense on the server, but if you have the source, you can just compile it natively for the hardware, which is always going to be better. Facebook’s PHP compiler is an example of doing this (compiled over interpreted). Yes, byte-coded languages compile to native, and sometimes even keep the native compile for reuse, but the compile must always be done quickly, which means full optimization isn’t possible.

The move from compiled-to-native languages to compiled-to-byte-code languages is often paralleled to the move from assembler to compiled languages, but it’s a false comparison. The jump in productivity vs. the loss of performance is nothing like the same. What byte-coded languages give you is the ability to run the same binary on multiple platforms, but in an open source world, we can just compile. It is cheaper to develop stuff in byte-coded languages if there is no competition, but when there is, the fastest one that gives the features required wins (assuming no cost all round). I see the .NET move as mostly a criticism of C++ on Windows, especially with MFC. If a nicer compiled language comes along (Google’s Go?), the picture will change again. Or, of course, properly compiling .NET to native becomes common – but then you are back where you were, with each company having to decide if it’s worth doing.

It’s going to be interesting if MS tries to use .NET to aid a move to ARM. It means they are going to be competing with something native on Linux against .NET things on Windows. What I guess MS will do is native-compile all their own stuff, but tell everyone else to use byte-code .NET, and tell people to use Mono so the same thing runs on Linux – running regardless of OS or chip. They can then try to use the whole chasing-tail-lights thing to stay on top (like they did with WISE on Unix). Failing that, there is always legal action against Mono for anything “the promise” doesn’t cover. Could be very interesting times! 🙂
Server 2008 R2 also runs on Itanium.
WinCE supports ARM too, so they are familiar with the architecture. It wouldn’t be a big leap for them given the design of WinNT and its HAL layer. However, it would cut them off from all the software that runs on Windows, so as another poster pointed out, there’s no incentive – there is no software base to keep them in the lead, especially when Linux provides a huge software base on every processor it runs on, which is well over 100 different processors (e.g. x86, amd64, ARM) and processor variations (e.g. i386, i486, ppro, p4).
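To make the “just recompile” point concrete, here is a minimal sketch: the same portable C source builds for x86 and ARM without modification, given a cross toolchain. The arm-linux-gnueabi-gcc name in the comments is an assumption – the exact compiler prefix varies by distribution and ABI.

```c
/* portable.c - minimal sketch: one source, many architectures.
 * Native build:          gcc -O2 portable.c -o portable
 * Cross-build for ARM:   arm-linux-gnueabi-gcc -O2 portable.c -o portable-arm
 * (cross-compiler name is an assumption; it varies by distro and ABI) */
#include <stdio.h>

int main(void)
{
#if defined(__arm__)
    const char *arch = "ARM";
#elif defined(__x86_64__) || defined(__i386__)
    const char *arch = "x86";
#else
    const char *arch = "some other architecture";
#endif
    /* Nothing below cares which instruction set it was compiled for. */
    printf("Built for %s; pointers are %zu bits wide.\n",
           arch, 8 * sizeof(void *));
    return 0;
}
```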
Sure, but will third parties port their software to Windows on anything other than x86?
They didn’t for the Alpha, so I don’t see why they would do it for ARM.
They could make Windows 8 support fat binaries, and refuse to give certain major features to applications that aren’t fat, so that developers will compile for ARM to take advantage of Windows 8 features.
Linux (OK, and android…) will have some free time to show something before MS catches up!
AFAIK the Windows NT kernel is very portable, and they have probably already ported it to ARM. The real problem is not the OS itself, but that all applications would need to be ported too.
Virtualisation. If ARM really takes off that way, MS will not face too many problems. They’ve prepared for this, trust me.
I don’t think that’s really a solution, Thom. Running an x86 application in a virtualised environment already takes a performance hit even on an x86 architecture. Emulating an x86 environment on a non-x86 architecture involves vastly more overhead, due to the amount of instruction translation that goes on. The performance hit would be huge – something we already see while using QEMU.
If MS want to compete with Linux and the BSDs on ARM, they would need all of their server software partners to port their apps to ARM, no doubt about it.
This is an area where FLOSS will really shine.
AFAIK that’s exactly what the Alpha version of NT used to do: emulate x86 for all those non-native apps. In fact, they have something similar for old 16-bit apps, and for 32-bit apps on 64-bit systems (the WOW stuff).
That’s also what Apple did when they transitioned from PPC to x86.
For MS it would be a way to wait while they push developers to use .Net to write portable apps.
They definitely have the expertise in house so it’s just a matter of whether they’re interested in helping ARM succeed.
Developers, like those that produce such popular apps as MS Office and Internet Explorer?
Sounds good in theory, but I don’t think Microsoft will ever convince Microsoft to rewrite those apps in the Microsoft .NET runtime or use standard Microsoft controls and interfaces.
MS inhouse apps aren’t the problem, they have the source for those and there’s nothing to stop them recompiling them for a new architecture…
It’s all the third-party closed-source apps, which also happen to be the biggest selling point of Windows. Convincing companies to port their current apps would be extremely difficult, and then there’s loads of effectively abandonware applications which would never get ported.
This is a problem open source doesn’t have, since as long as one user who’s handy with a compiler wants to use an application, it can be ported to the new platform.
Minor nitpick: it’s all the third-party closed-source apps, as well as all the third-party closed-source drivers.
Second minor nitpick: This is a problem open source doesn’t have, since unlike Microsoft, open source has all the source code available to compile by anyone who cares to.
http://www.linuxfoundation.org/node/4641
As it turns out, all of the source needed for a Linux distribution for ARM is even available to ARM themselves.
Although you’ve got some great points there, it wasn’t MS that developed the 32-bit x86 emulation layer for Alpha (called FX!32); that was Digital. I’ve yet to find a performance comparison of the apps that ran in this emulation layer, but I have found some very positive remarks on NT’s own performance on Alpha. I do know that much of NT’s 64-bit development was originally done on Alpha systems, due to Itanium not being available until much later than expected.
FX!32 performed, IIRC, about as well as the fastest Pentium Pros at the time.
The problem is, this was on an Alpha running at well over twice the clock speed of said Pentium Pros.
Different things here.
WOW is not emulation. The x86 chip is running those instructions as is. The WOW OS calls are wrappers to the true OS calls. So we can forget about this one as it’s not relevant.
Apple’s Rosetta is something quite different. The Alpha NT stuff was similar.
http://en.wikipedia.org/wiki/Binary_translation
All of which is clever, but just not as good as having the source and compiling natively. And if Windows applications are all .NET, it is only going to make the platform seem even fatter and slower than Linux.
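To make the overhead of running foreign binaries concrete, here is a toy sketch in C using a hypothetical two-opcode guest ISA (nothing here is real x86 or ARM): an interpreter pays a fetch/decode/dispatch cost on every guest instruction, and while a dynamic binary translator amortises that by translating blocks up front, the generated code still can’t match software compiled natively from source.

```c
/* toy_emu.c - toy sketch of why emulation costs more than native code.
 * The guest ISA here is hypothetical (two opcodes), not real x86 or ARM. */
#include <stdint.h>
#include <stdio.h>

enum { OP_ADD = 0, OP_HALT = 1 };   /* hypothetical guest opcodes */

/* Interpret a guest program: every guest instruction costs a fetch,
 * a decode, and a dispatch on the host before any real work happens. */
static int64_t interpret(const uint8_t *code)
{
    int64_t acc = 0;
    size_t pc = 0;
    for (;;) {
        uint8_t op = code[pc++];          /* fetch */
        switch (op) {                     /* decode + dispatch */
        case OP_ADD:
            acc += code[pc++];            /* the actual work */
            break;
        case OP_HALT:
            return acc;
        default:
            fprintf(stderr, "illegal opcode %u\n", op);
            return -1;
        }
    }
}

int main(void)
{
    /* Guest program: acc += 5; acc += 7; halt. */
    const uint8_t program[] = { OP_ADD, 5, OP_ADD, 7, OP_HALT };

    /* Native equivalent of the same program: one expression, no dispatch. */
    int64_t native = 5 + 7;

    printf("interpreted: %lld, native: %lld\n",
           (long long)interpret(program), (long long)native);
    return 0;
}
```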
And when stepping from 68k to PPC!
Apple also did the same when they transitioned from m68k to ppc…
However what these 3 examples have in common, is that the processor doing the emulation is considerably faster than the one being emulated, and incurs a significant overhead in doing it.
ARM on the other hand is designed for power efficiency, and while the newer chips may be performance competitive they certainly don’t have a massive speed advantage over x86 like alpha did. Also the power efficiency is massively reduced if you have such a high level of overhead as emulation.
Or the rumoured Windows successor which is built with a managed code userland.
In the case of Windows, not only the applications, but also the drivers. That is the ultra-critical bit.
Linux has source code for its drivers, as does Mac OS X et al.
Windows doesn’t. Microsoft leaves it up to the equipment OEMs to write the drivers for Windows, and as a consequence, Microsoft doesn’t own the source code.
Not only is there a problem in getting a whole raft of other companies to port applications to Windows on ARM, it is further compounded by having to get a whole raft of equipment makers to port drivers to Windows on ARM.
In addition, consider the printer, scanner, phone, and camera… all of which you now use with Windows… they all came with drivers on disks (probably CDs) – x86 drivers, x86 binary-only drivers. They won’t work with any new Windows-on-ARM computer that you buy.
In addition, all of the Windows applications that you have already bought (some of them very expensive)… they too have been bought as x86 binaries. You won’t be able to move them to your new Windows-on-ARM machine; you will have to buy them all over again.
Oh oh.
Worst case if they really can’t make any other legacy-windows-strategy work: They have prepared the .NET ecosystem as a fallback, throw Singularity or something similar onto ARM and just make .NET the new primary development model. Sure that loses them the whole legacy audience, but they would have a compelling offering from the get-go anyway.
More likely, if ARM really takes over completely, just port NT (it is perfectly portable), JIT x86 into ARM code. Nothing performance-intensive would work, but most critical legacy stuff is already old and shouldn’t require too much performance.
Think about it… what mainly runs on Windows Server? Microsoft will be keen to compile IIS, .NET, and SQL Server for the ARM platform. All Microsoft software will get an ARM version pretty soon. If ARM is successful, expect other major players like IBM and Oracle to create ARM versions of their Windows programs too. I just can’t think of any real obstacle for Windows on ARM in the server market. It’s already standardized and much tighter than the desktop market.
About an x86 JIT translator… it can be done, but I’m not sure how efficient it will be. The code today’s compilers produce is highly optimized for the execution pipelines of the current generation of processors.
Indeed, Microsoft building a Windows Server version for small ARM blades would be an interesting first step for them. Now that you have brought it up I can’t help thinking that this would likely be the first sign we could see. Physicalization is a pretty big movement in the server market these days, and relies on small and cheap machines…
Food for thought. It certainly is more fun discussing Microsoft these days when they aren’t quite the evil juggernaut they once were.
FX!32 used a different approach. It was a profiling recompiler. Not a JITter, it instead analyzed what code was doing, and recompiled it as needed to improve performance.
While I know that there are devices with a few ARM chips in them, it is not clear whether you can run one OS across multiple ARM CPUs (more than one socket). AFAIK there has to be hardware support for that (as in, ARM has to design the chips with support for it).
And anyway, A15s are coming in 2013… YEAH!
Apple is probably expanding iOS to run on normal laptops by then. I think at some point OS X will only be on the Pro line. ARM chips will have extra hardware for demanding tasks like video encoding.
Who cares about low energy consumption on a high-performance desktop PC? Come on. I have a GeForce GTX 480 – which alone takes how much power? I don’t know, but it’s a lot.
So would I care to replace my Corei7 with an ARM CPU to save a few watts? I don’t think so.
Sure it would be a perfect chip for laptops and tablets but not anything near high performance servers and/or desktops.
As for the server side, virtualization already solves the power consumption problem there.
You’ve got some great points there, but I disagree with your analysis of virtualization solving the power issue server-side. Not only will ARM chips use less power than their x86 counterparts, they also now support hardware virtualization extensions, giving you even more savings on the power consumption front.
Personally I can see a huge amount of interest in ARM based servers, for the above two reasons alone. If VMWare port their stack to ARM, I can think of quite a few companies that would be very interested in using them for their Linux servers. If not, there is always KVM and/or Xen.
I think that servers would care. It might just save a few watts per chip, but that adds up. A large server with many processors could save hundreds or thousands of dollars a month in electricity.
The computer enthusiast would probably value the low initial cost of the processors. It would be possible to build a computer with multiple processors that outperforms an i7 at lower cost.
Just to clear some stuff up: the electricity you save is not just from the CPUs themselves. A CPU that doesn’t draw as much electricity also doesn’t generate as much heat, which means less cooling is necessary, and less cooling also means less electricity used.
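A back-of-the-envelope sketch of how that compounds – every number below is an assumption picked purely for illustration, not a measurement:

```c
/* power_savings.c - back-of-the-envelope sketch of server power savings.
 * Every figure here is an assumption chosen for illustration only. */
#include <stdio.h>

int main(void)
{
    const double sockets          = 200.0;       /* CPUs in the server room (assumed) */
    const double watts_saved_each = 30.0;        /* W saved per CPU vs x86 (assumed) */
    const double cooling_overhead = 1.5;         /* ~PUE: each IT watt costs 1.5 W at the wall (assumed) */
    const double hours_per_year   = 24.0 * 365.0;
    const double price_per_kwh    = 0.12;        /* USD per kWh (assumed) */

    double it_kw    = sockets * watts_saved_each / 1000.0;
    double wall_kw  = it_kw * cooling_overhead;  /* CPU saving plus the cooling it no longer needs */
    double kwh_year = wall_kw * hours_per_year;
    double usd_year = kwh_year * price_per_kwh;

    printf("IT power saved: %.1f kW\n", it_kw);
    printf("At the wall:    %.1f kW\n", wall_kw);
    printf("Per year:       %.0f kWh (~$%.0f)\n", kwh_year, usd_year);
    return 0;
}
```

With those made-up figures the saving lands in the high hundreds of dollars a month – the same ballpark the comment above suggests.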
We’re on the cusp of the many-core era. Putting aside supercomputers from the likes of Cray, a consumer, right now, can easily build a machine with two Magny-Cours, for a total of 24 execution cores in one box. And Intel has unveiled some interesting prototypes like their 80-core CMP and Larrabee.
Right now, we’re still interested in single-thread performance, and adapting to multicore is difficult. But looking forward, we expect multicore performance to dominate. Assuming you have unlimited parallelism available, it’s best to cram as many simple cores onto a die as possible for a given power budget.
So rather than a handful of fast x86 cores, we’re going to see CPU packages with dozens or hundreds of simple cores like what ARM puts out. Instead of 4 Intel processors in 130 watts, you get 130 1-watt ARM processors in the same space and power budget.
This isn’t about conserving power. It’s maximizing compute per watt.
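A minimal sketch of that compute-per-watt arithmetic, with every figure assumed for illustration (a fat core taken as 4x the speed of a simple core at over 30x the power, and a 5% serial fraction in the workload):

```c
/* perf_per_watt.c - sketch of the many-simple-cores vs few-fat-cores trade-off.
 * All performance and power numbers are assumptions for illustration. */
#include <stdio.h>

/* Amdahl's law: speedup over one core given n cores and serial fraction s. */
static double amdahl(double n, double s)
{
    return 1.0 / (s + (1.0 - s) / n);
}

int main(void)
{
    const double budget_w     = 130.0;  /* total power budget (assumed) */
    const double fat_perf     = 4.0;    /* one fat core = 4x a simple core (assumed) */
    const double fat_watts    = 32.5;   /* per fat core (assumed) */
    const double simple_watts = 1.0;    /* per simple core (assumed) */
    const double serial_frac  = 0.05;   /* 5% of the work is serial (assumed) */

    double fat_cores    = budget_w / fat_watts;     /* 4 cores   */
    double simple_cores = budget_w / simple_watts;  /* 130 cores */

    /* Throughput in "simple-core units", scaled by Amdahl's law. */
    double fat_throughput    = fat_perf * amdahl(fat_cores, serial_frac);
    double simple_throughput = 1.0      * amdahl(simple_cores, serial_frac);

    printf("fat cores:    %.0f cores -> %.1f units\n", fat_cores, fat_throughput);
    printf("simple cores: %.0f cores -> %.1f units\n", simple_cores, simple_throughput);
    return 0;
}
```

With these assumptions the 130 simple cores come out ahead (roughly 17 units vs 14); raise the serial fraction and the few fat cores pull back in front – which is exactly the single-thread-versus-multicore tension described above.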
ARM processors aren’t really just low-power; they’re low AVERAGE power. At 32nm, leakage (idle) power actually dominates. ARM’s strength is, and has always been, exceptional computational efficiency: blast through the work quickly and then SHUT OFF. With the Cortex-A9, the granularity has been refined from whole-core down to individual computational functional units within a core. ARM instructions require less power to decode and execute – it’s a RISC processor, so the decode is basically nil – and they are more computationally efficient because things like predication are built into every instruction.
For a frame of reference, there was an IBM supercomputer designed a few years ago that was made up of, basically, G4 processors. The thing was, the interconnects between processors were the dominant efficiency bottleneck. To pack more compute power into the same space, they underclocked the processors to 800MHz. The cores were more power efficient, so they could build a bigger supercomputer and get more compute power for the same power budget.
…that they ARE making some inroads into the server market? This should shore up that gain and add to it.
Good for them! Gotta love the hardware competition… push each other a little and see what comes of it.
Intel buys ARM?
If they’re going to be competitive on price, if anyone is going to make them in sufficient numbers to matter, etc, etc..
The 1GHz barrier is still around for practical deployment on most ARM implementations… and since Intel and AMD have both caught up to and/or surpassed the MHz myth, one has to question whether ARM’s reception in the ‘real world’ of computing isn’t going to be much akin to VIA’s.
You remember VIA, the company that SHOULD have pwned the netbook market completely with their EXISTING Samuel and Eden processors from day one, and yet continues to fade away like an old soldier? (The ONLY people who continue to use them are the nutjobs who build their own nano-ITX and pico-ITX MP3 players into their cars’ trunks.)
Price, I think, is really going to be the make-or-break for this – I mean, right now a Cortex-A8-based OMAP3530 will set you back $120 a pop for an order of 100+. We’ll be lucky if a quad-core 2.5GHz ARM doesn’t end up costing $300-$400 a pop wholesale for mass orders, with NOTHING else on the die… and I very much doubt, for all the ‘RISC is faster per clock’ nonsense, that such a chip would be faster than, say, a 1090T or i7 870, particularly in the memory bus and device access department.
You also just have the stigma of ARM — it’s been a niche market processor FOREVER, and because ARM doesn’t actually MAKE chips and just licenses designs for others to build, they lack the marketing ‘oomph’ to break out of the stereotype of being “a goofy toy processor for handheld devices” with anyone other than the die hard geeks who find linsux useful as a desktop OS (the mind boggles).
Now, the server market — THAT could be a real place to make inroads, but I think that’s really going to hinge on Price Per Unit, Price to Operate, etc, etc… Personally I don’t think it’s going to deliver on that any better than the six to ten dozen other alleged ‘market pushes’ ARM is supposed to have had the past DECADE.
Right now it’s a case of “ooh look, we have a design and a few prototypes, but since we don’t actually manufacture chips ourselves it will be five or six years before anyone makes these for retail – at which point they’ll be just as irrelevant in those markets as the 1.4GHz A9 is today”.
Niche market? I can assure you – more ARM chips are sold than AMD and Intel sell chips combined. In fact, I’d hazard a guess ARM chips outnumber x86 AT LEAST 2-1, maybe even 3 or 4-1.
A niche doesn’t necessarily mean small.
All it means is highly specialized and specific to a specific role.
ARM chips are niche products, though, it is a very large/common niche.
Modern “superphones” are not much more specialized than desktop PC’s.
Let’s face it: ARM is a huge player. ARM is starting to challenge Intel on bigger devices, and Intel is starting to challenge ARM on the smaller ones.
My long term bet is on Intel – ARM has owned the market because Intel never tried to compete with them directly (because ARM’s “niche” was not that interesting). Currently, that “niche” is the one that has all the future and growth.
ARM is a huge player in the mobile phone space, and fairly large (but not dominating by any means) in other integrated systems. They are not a huge player in any other markets.
Also, there is a difference between “starting to challenge” and “wanting to challenge” Intel.
ARM has made some progress on challenging Intel with netbooks, but does not exist in the server space, or even desktop space.
The sad thing is, ARM used to be in the desktop space – that’s actually where they started out, and they had a quite competitive CPU… that wasn’t marketed well, and the computer around it wasn’t great.
I don’t see any mention of 64-bit addressing. If this is a 32-bit-only processor, it’s DOA for general processing. High-capacity RAM is dirt cheap; why not be allowed to use it?
They’ve got an MMU extension that does 40-bit physical addresses. So it’s a 32-bit virtual address with 40-bit physical addresses.
Interesting… this Core i7 I have here is 48-bit virtual and 36-bit physical; a 12-core Phenom shows 48/48.
Extending the physical address space but not the virtual one only solves a small part of the problem. It still means a lot of code I have won’t run on that A15.
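The arithmetic behind that complaint, as a quick worked example using the figures mentioned above (32-bit virtual addresses, 40-bit physical addresses):

\[
2^{32}\ \text{bytes} = 4\ \text{GiB (per-process virtual limit)}
\qquad
2^{40}\ \text{bytes} = 1\ \text{TiB (machine-wide physical limit)}
\]

so the machine can hold a terabyte of RAM, but any single 32-bit process still sees at most 4 GiB of it.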
Sounds like Intel’s PAE extensions. We all know how “wonderfully” that worked out.
http://dir.filewatcher.com/d/Fedora/i686/System%20Environment/K…
Intel and AMD should be worried, because a relatively small company without their own manufacturing capabilities announces their first high-performance design, due out in 3 years?
ARM has an uphill battle as far as technology goes.
They only have 32-bit chips, so addressing more than 4GB requires PAE-type extensions, with an associated loss in performance.
Putting multiple chips on a board is quite different from putting multiple cores on a single chip. You can’t use shared caches, which greatly increases the importance of a good cache-coherency setup, and the interconnects are much slower.
Including virtualization, I think, is a waste of die space, thus a waste of power. This chip seems well suited to the “physicalization” trend, where instead of heavy-duty systems running multiple smaller virtual machines, you rely on many small machines to run small tasks. Apparently, this is the way to go if you are running many, many light-weight tasks. Virtualization hardware also isn’t something that can be shut off when not in use, like you would an ALU or FPU.
This chip sounds cool, and while I doubt it really will make much headway into server or laptop markets, it’ll definitely serve to keep Intel out of phone/handheld markets, at least for a few more years.
Agreed – this ends up being more of a stop-gap processor with a short life before a real 64-bit one comes out.
To get into the future tablet/netbook market, and more importantly the low-power server market, they really need that 64-bit…
Not only that, but there have been multicore low-power MIPS parts, even with 64-bit versions of the architecture… and none of them have made it to the commodity market.
The markets that may end up adopting these sorts of ARM parts are not the ones most of the posters in this thread think of.
It makes sense: when all you know how to use is a hammer, every application looks like a nail. Most people here know computing at the personal-user level, so every processor looks like it is supposed to go into the white-box system they assembled in their parents’ basement.
I agree; I don’t see why people are so eager to wait for ARM, even though MIPS is already 64-bit and also has a 16/32-bit instruction-length mode (though I don’t know if it’s compatible with the 64-bit address space).
ARM is simpler to decode than x86, so there is probably a power advantage, but it’s not simpler to decode than MIPS…
Yet I don’t see many MIPS servers, so I don’t see why ARM would be more successful…
MIPS servers have been and gone…
Look for old SGI hardware on ebay, most of that was MIPS based. I have an SGI Onyx, capable of holding 24x 64bit MIPS R12000 processors, and SGI used to produce one which scaled to 512 processors.
MIPS never really got traction. One important telltale sign that ARM is doing well today is that more and more open-source libraries contain optimized assembly paths not just for x86 but also for ARM. Apple has done us all a great service by getting ARM on the map for more general-purpose processing.
And I’d love to see MIPS come back. More than a few gaming devices use MIPS (PSP, Dingoo, etc.). It seems to be a decent architecture that historically has scaled better than ARM.
Stop thinking desktop.
Put this in a phone / tablet without freakin’ Windows and you will have something faster, more productive, more fluid and responsive than 10 years of x86 + Microsoft.
Yeah, Intel should be worried because x86 hardware sucks. The x86 tablets are laggy as crap (Intel graphics). Battery sucks. Heat sucks. Software sucks even more.
No one has focused x86 on the ARM market other than VIA, and VIA hasn’t delivered, or has fallen short.
We’ll have to see what AMD’s Bobcat architecture brings to the table. It looks to be a very interesting architecture, and it could be a better competitor than Atom.
x86 has a lot of inertia due to years of software development, and that inertia isn’t going to dissipate anytime soon. ARM has inertia too, but I don’t think it can be a real x86 competitor until there are reasonably priced desktop boards on Newegg. The power of getting technology to the masses really can’t be overstated.
Won’t matter. The die space spent just on decoding x86 is itself comparable to many entire ARM cores. With silicon transistors starting to run out of frequency and size scaling, there’s no way x86 will ever be able to touch ARM on the power side.
I agree that ARM hardware is rockin’, but it will take some time before the big-name laptop manufacturers (Dell, HP, Lenovo, Acer, Asus, Toshiba, Samsung, Sony) collectively encourage each other to get off their asses and do new, cool stuff with all this technology. The problem is that there is as yet no *one OS to rule them all* for laptop-form-factor ARM devices (at least excluding Android, which IMHO hardly qualifies as a laptop OS). Maybe MeeGo could become that. Maybe Ubuntu. Maybe Microsoft will come up with something (though that’s admittedly a huge maybe). Maybe the traditional laptop form factor is going the way of the dodo anyway. At this point, it’s all baby steps and guesswork.
As always, real progress lags behind the conjuring of the imagination, yet arrives faster than anyone really expects.
I’m sure ARM will conquer the server market just as well as they have conquered the netbook market.
I’m still waiting for my ARM netbook…
Well, actually there are zillions of ARM netbooks/tablets already available.
Most of them are Chinese crap, though.
However, some, like the Efika Smartbook, are definitely worth buying.
I don’t think they are going after Intel, nor AMD. They are primarily protecting their existing market shares. Desktop computers and servers … yeah right…
The A15 is a natural evolution of the Cortex A9 and it will go into similar products, such as smartphones and iPads. Performance requirements will surely get tougher with new exciting online content yet to be developed in the three years to come.
My interpretation of the press release says they have a desire to grow in the telecom and datacom equipment markets where they typically compete against the Power architecture (e.g. Freescale), not Intel. Here virtualization becomes important and it is a trend that can be observed in many embedded multicore processors at the moment.
Some virtualization use-cases can be found here:
http://www.power.org/events/powercon/paris/PAC08_EmbeddedVirtualiza…
ARM don’t make processors themselves, and it seems none of their partners are making chips for anything other than embedded uses… It remains to be seen whether anyone will take up these new server-oriented designs or not.
There will be a lot of resistance to them, as initially there will be little or no closed source software for them… No windows, no oracle etc… Even support by mainstream linux distributions is lacking in some areas, and linux is generally just a recompile away.
Personally, I would love to migrate to ARM-based servers; everything I’m running is open source, and power is my biggest expense these days.
Oracle runs on Linux.
The current state of the art for real-world production ARM chips that you can actually buy, today, is the Samsung Orion: a dual-core Cortex-A9 at 1GHz.
http://www.intomobile.com/2010/09/07/samsung-orion-1-ghz-45-nanomet…
http://www.blogmera.com/samsung-will-release-the-orion-1-ghz-dual-c…
This chip is designed to handle true high-definition (1080p) video. It is already at desktop-level performance capability.
Gather together, say, 16 of these chips and you have the basis of a high-performance enterprise server (for use with Oracle on Linux, say, or as a high-traffic enterprise web server).
Gather together a few thousand of them and you have a supercomputer that can run on batteries.
As for support by Linux …
http://www.debian.org/ports/arm/
https://wiki.ubuntu.com/ARM
http://www.osnews.com/story/22166/ARM_Joins_Linux_Foundation
http://www.linuxfoundation.org/node/4641
Linux runs everywhere.