AppleInsider is reporting that developers who have received Apple’s developer-only Intel Macs are very pleased with the performance of Mac OS X on them. Some even claim that they are faster than high-end dual G5s. Developers also report that Apple’s Rosetta technology, which allows Intel Macs to run PowerPC software, is fast and seamless.
Someone please leak the DVD on torrent and give the link to the nice guys over at http://www.osx86.classicbeta.com !
How well Rosetta works is going to make or break the x86 transition in my opinion, and according to this article “The apps run at about 65 to 70 percent of their normal speed.” and “Rosetta is completely 100 percent seamless and nothing like the Classic environment…”
This bodes VERY well for the transition. If this article is accurate, I suspect that this transition will be much, much easier than the transition to OS X.
Rosetta is VERY important to the transition, but I think many people vastly over-estimate its performance. The real key to a successful transition is making Rosetta as unnecessary as possible (while making sure that it runs seamlessly IF it is needed). That is why Apple has given software developers a one-year head start on creating universal binaries which will run on Intel and PPC Macs. In a perfect transition, developers would have universal binaries ready well before the first Intel-based Mac is released and nobody would need Rosetta.
I don’t think Rosetta will be that important at all. I mean, most applications will simply recompile. Some might need a little work, but I believe that 95% of applications still being produced today (ie. not discontinued) will be running on x86 from the first day consumers can buy these machines.
But it really isn’t surprising that this speed is achieved. First, how fast do full machine emulators run? 50% speed? But Rosetta doesn’t have to lift everything the way an emulator does. For example, one thing an application does is create windows. This is, of course, done through an API in the OS. So Rosetta doesn’t have to translate that. It calls that object or function in the OS, which is compiled for Intel. Anything that is calling APIs should run at or near 100% speed. Only the application code has to be translated, which means that 75% speed isn’t so “oh, it must be exaggerated”.
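To make that split concrete, here is a toy sketch of the idea. The function names are made up for illustration (they are not real Carbon or Rosetta APIs); the point is simply which half of the program a translator actually has to work on.

#include <stdio.h>
#include <stddef.h>

/* Stand-in for an OS-provided call (e.g. "create a window"). On a real
 * Intel Mac this would already be native x86 code inside Mac OS X, so a
 * Rosetta-style translator just calls it; there is nothing to translate. */
static void os_create_window(int width, int height)   /* hypothetical name */
{
    printf("native OS code: created a %dx%d window\n", width, height);
}

/* Application code: this is the part that ships as PowerPC instructions
 * and is the only part the translator actually has to convert to x86. */
static void blur_pixels(unsigned char *pixels, size_t n)
{
    for (size_t i = 1; i + 1 < n; i++)
        pixels[i] = (unsigned char)((pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3);
}

int main(void)
{
    unsigned char image[4096] = {128};

    os_create_window(640, 480);        /* would run at native speed       */
    blur_pixels(image, sizeof image);  /* would pay the translation cost  */
    return 0;
}

In a typical GUI app the second kind of code is a small fraction of what actually executes, which is why the overall slowdown can stay well below what whole-machine emulation costs.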
Frankly, this is going to be an easy transition. Most applications compile across different processor architectures – which is why you can have one source for a Linux x86 and Linux PPC application.
That’s an excellent point. A study on MacOS (the original, not OS X) showed that GUI apps spend 90% of their time inside the Toolbox (the Mac’s system toolkit).
— “I don’t think Rosetta will be that important at all. I mean, most applications will simply recompile.”
But those versions are not the versions of software that people have on their software CDs. The key to making the transition smooth is to make sure all the software people already have continues to work, without them having to upgrade or download new versions or patches. People will expect to install their old copy of Photoshop on their new Intel-based Mac and have it work just as well as it did on their previous Mac.
The ease of recompiling is important, yes, but in the end it's the user experience that is key. The transition will only be as seamless as Rosetta is.
after all, it is just returning OS X to its original platform..
well you know what i mean, OpenStep was running on x86 long before it lived on PPC as OS X..
either way, i just wish they were selling the boxes now
Returning OS X to its original platform would mean returning to the 680×0.
You mean like this.
http://www.appletalk.com.au/articles/68kpanther/
No, like this
http://www.old-computers.com/museum/computer.asp?st=1&c=277
NeXT OS originated on the 68030. A 68040 version was also made. From there, it moved to the PC and the name changed to NextStep.
haha yes I know.. I'm just saying that OpenStep has lived longer on x86 than it has on PPC.. (especially with the internal builds these last few years)
Do you mean the VAX? That's because Mach was originally developed for the VAX out of 4.2BSD, but it was designed to be portable and found its way to a lot of different VAXes, the IBM RT/PC, and the Sun 3.
Yes, I suppose we’re being rather silly with this discussion.
Don’t forget that FreeBSD ran on x86 long before PPC too.
FreeBSD + OpenStep GUI = MacOS X Intel coming full circle.
When Apple spoke all these years about Apple PPC hardware being superior to x86, they were blowing smoke up people’s ass?
I’m really interested to see how much they’re going to charge for x86 Macs. People have argued for years whether or not Macs are overpriced. But since they will soon be running on PC hardware, I guess we’re gonna find out
When Apple spoke all these years about Apple PPC hardware being superior to x86, they were blowing smoke up people’s ass?
They’re stronger on different tasks, I expect benchmarks will show some things up, some down.
That said, the Mac P4 will have lower memory latency, and that 2MB cache will make a big difference as well.
The G5 hasn’t been modified in any major way for a long time now so it’ll be interesting to see how the new ones (with 1 core disabled) will go against the P4 machines.
Just 'cause they are using Intel procs doesn't mean they are PC hardware. They will be like any other Mac ever: custom hardware.
And going to Intel may very well not make them cheaper. Apple said the 970 cost less than an Intel chip. So they very well may go up in price.
You never have looked inside a modern Mac, have you?
The only custom piece in the Intel Macs will be the motherboard configuration.
That's right, and that's what I said. Apple isn't going to be building these Intel Macs using ATX mobos. You still won't be able to compare things one to one. Closer in many regards, but still not the same.
Also, the PSU is very custom in them, and so is the cooling system. And the case, of course, doesn't follow much of any standard case design.
The original post implied that Apple will be using standard mobos and such.
The cooling system isn’t particularly custom. The pump, for example, is just a slightly modified Laing DDC (Swiftech MCP350), the same pump used by tons of PC modders.
…the same pump used by tons of PC modders.
But, I rather think you just defeated your own argument. The pump might be a stock component, but PC modders are a niche market at best, who by definition build their own machines or make modifications to purchased ones. Apple is the only major manufacturer that would dare push the envelope and make such a component standard on their machines…next time Dell has a water-cooled system, let me know.
Alienware and other manufacturers that cater to people who want extreme performance have that “pushing the envelope” covered.
On topic:
The benchmark would certainly be interesting between the G5 Mac and the Intel Mac, and also between the Intel Mac and an ordinary XP box.
Apple isn’t Dell. It’s a boutique computer shop selling boutique computers. Lots of such shops in the PC industry sell pre-modded computers with watercooling.
Besides, my argument was that Apple’s components were stock, not custom. Even if they use a watercooling system, it’s pretty much a stock watercooling system. It’s not like they use custom Apple stuff.
PS> Watercooling your computer isn’t “pushing the envelope”. Watercooling is dangerous and often noisy (though, Apple did a good job using a very quiet pump and a bearable fan and radiator). It’s a last resort used when your processor is overclocked and runs too hot for a traditional HSF, or better yet, a big passive heatsink and a ducted fan (what Dell machines use — they’re *really* quiet).
Macs have very little custom hardware in them. It’s the same generic RAM, same ATI and NVIDIA graphics cards, same Maxtor hard drives, etc. The only things custom in there have been the motherboard, case and power supply. With the Mactel, Apple will be using an Intel chipset, so its machines will be as stock as Dell’s: slightly modified motherboard and power supply, custom case.
or maybe they re-examined the products and realized new benefits.
the architecture is immaterial not only for most users but for an awful lot of developers. so anyone that is gonna say “x86 sucks” can go pound sand, because honestly, it doesn't even affect you.
Mr. “Jobs” takes the Intel Macs and makes bullshit… years and years of saying “Macs are best…”, “PowerPC is best”… bah, FUD FUD!
The same story as with Windows… “XP is more secure” than, say, Me…
ALL lies!
These are the developer-only Macs… the “regular” Intel Macs will probably be a lot slower.
Nonsense. If anything they’ll be faster. A year from now the consumer models will have performance similar to these developer machines, and the pro-level models will be faster, because they’ll have the latest and greatest.
While it does sound like they’re putting P4EEs in those dev machines, they’re not dual-core! I doubt we’ll even see an Intel-based PowerMac with a non-dual-core chip.
Where’s your justification for this statement? I would actually expect them to be faster, since chips will only get faster by the time they come out.
They are shipping the highest end P4’s aren’t they? 2MB cache @ 3.6 sounds like the current P4EE. Were G5’s really that expensive (I doubt it)?
The new Prescotts have 2MB of cache. Here’s the link: http://www.intel.com/products/processor/pentium4/index.htm
Why would consumer Macs be any slower? It's not like those system specs would break the bank, especially compared to PPC costs.
There is something very fishy about those claims. Those numbers simply don’t add up. I'll believe it when I see it. Besides, who cares about Windows boot times anymore when everyone is running Linux or Mac OS anyway.
This is boring.. Apple is good, Intel is bad. Apple is good, IBM is bad. Who cares?! Do they have a product that can compete in any market other than wacky marketing folk?
I’m curious how they get 60-70% efficiency running entirely different code like that. Of course, Transmeta would have had to have the same sort of efficiency to make their processors viable (and they were viable; not good, just viable).
They are not emulating, they are translating the binary and saving the translation, like FX!32 used to do on Windows NT for Alpha. This makes the application run much faster after the initial translation.
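Roughly, the translate-once-and-cache idea looks like the sketch below. This is a hedged illustration, not actual Rosetta or FX!32 code, and whether real translations persist across launches (as FX!32's did) rather than only within one run is a separate question.

#include <stdio.h>
#include <stdlib.h>

/* Very rough sketch of "translate once, reuse the result". */

typedef unsigned long guest_addr;          /* address of a PowerPC code block */
typedef void (*host_code)(void);           /* pointer to translated x86 code  */

struct cache_entry {
    guest_addr block;
    host_code  translated;
    struct cache_entry *next;
};

static struct cache_entry *cache = NULL;

/* Pretend "translation": a real translator would decode PowerPC
 * instructions and emit equivalent x86. Here we just hand back a stub. */
static void translated_stub(void) { puts("running translated block"); }

static host_code translate_block(guest_addr block)
{
    printf("translating block 0x%lx (slow, done once)\n", block);
    return translated_stub;
}

/* On the first visit a block is translated and cached; every later visit
 * jumps straight to the cached native code, which is why performance
 * improves after the initial translation. */
static host_code lookup_or_translate(guest_addr block)
{
    for (struct cache_entry *e = cache; e; e = e->next)
        if (e->block == block)
            return e->translated;              /* cache hit: no translation */

    struct cache_entry *e = malloc(sizeof *e);
    e->block = block;
    e->translated = translate_block(block);    /* cache miss: translate once */
    e->next = cache;
    cache = e;
    return e->translated;
}

int main(void)
{
    lookup_or_translate(0x1000)();   /* slow: translated here          */
    lookup_or_translate(0x1000)();   /* fast: reused from the cache    */
    return 0;
}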
— “I’m curious how they get 60-70% efficiency running entirely different code like that.”
Like saterdaies said above, it's probably because only the application code has to be translated. Most of what a modern GUI app does is actually calls to resources provided by the OS (window/disk/memory handling, etc.), which are completely native in this case. I'm sure it's FAR faster than having to translate the whole OS along with the app running in it, like with Virtual PC and similar products.
Why would consumer Macs be any slower? It's not like those system specs would break the bank, especially compared to PPC costs.
It’s obviously not a normal x86 computer (windows XP blazing fast?).
Either:
1. It’s going to be really – REALLY – expensive for consumers.
2. Apple is trying to create hype, and giving developers extra incentive to port apps.
Apple is the only major manufacturer that would dare push the envelope and make such a component standard on their machines…next time Dell has a water-cooled system, let me know.
There was a company called Kryotech that was shipping liquid-cooled Athlon 800 systems clocked to over 1GHz some years back.
Apple’s not really pushing the envelope here, as it's been done for years in one capacity or another.
In Dec 2000 I saw an 800 MHz Intel overclocked to 1 GHz, cooled with Peltier cells to a few degrees below zero (more high-tech than water cooling), and I don't think it was the first or the only one! So why do so many Apple users claim Apple's water cooling, 2-3 years later, is so revolutionary?
It’s not even water cooling really: no pumps etc., just heatpipes.
Jees heatpipes are barely better than a chunk of copper.
And peltier is a bit more extreme than water.
And most x86 chips IME will get a 33% overclock on air.
http://www.apple.com/powermac/design.html
there certainly is some liquid cooling involved. it’s not a traditional setup though.
“According to sources, web browsing in general is much faster under Mac OS X for Intel than it is under the shipping version of Mac OS X for PowerPC. Web pages snap to the screen, the same way they do in Internet Explorer running on a new Pentium system, they say.”
Hey guys!! It runs just as fast as windows now!
Ohhhh? I guess it’s time for those PPC fanatics to eat yet more crow, as they predicted performance would see a hit when Apple users moved to the new intel boxes.
I mean seriously people, Apple is a company, with money, programmers and engineers, I think they can handle the simple task of managing a cross platform OS.
What do Quake 1, 2, 3 and UT-based games all have in common? They all run much faster on x86 than PPC. I've seen it over the years, benchmark after benchmark, despite the same video cards. Good riddance to PPC crap.
actually the benchmarks for quake 3 are faster on macs than pcs, because one of the core devs took a personal interest in optimising the ppc build as much as he could.
I’m quite sure Apple will tweak the Intel Macs, which is quite feasible with sufficient watercooling. Besides, there will be dual cores inside.
They should be pissed then, for all those years that Jobs & Co spent “proving” how much better PPC was.
Of course now Intel is faster..and guess what, had Apple moved to AMD, _that_ would be the fastest.
Oh, the joy of PR
apple never claimed intel was faster than the g5 right now (and if you look at the benchmarks, they’d be crazy to. the g5 kicks its ass in a lot of areas).
this transition really boils down to a few points: 1) laptops sell better than desktops, and the g4 is NO MATCH for the centrino, 2) top of the line g5s require water cooling to maintain a reasonable heat / noise level, 3) the 3ghz mark has still not been reached when it was predicted to be over a year ago.
apple’s doing this transition now because of the future. apple’s doing “ok” for now, but it’s down the road that all those factors will really matter.
No, they should be pissed on for WASTING all of that money on powerpc R&D (Hardware + software).
As to fat binaries: nah, I'm not going to be interested in downloading something that's 20M instead of 10M (or less, as x86 binary code IS smaller than PPC) just to include PPC support. Fat binaries are nifty, but I'd prefer separate downloads, just like they did for BeOS.
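For what it's worth, the size cost follows straight from the universal ("fat") file layout: each architecture's executable is stored as a complete, independent slice. Here is a small sketch that mirrors the structures documented in Apple's <mach-o/fat.h> (field types simplified); the slice sizes are invented numbers, just to show why the download is roughly the sum of the per-architecture builds.

#include <stdio.h>
#include <stdint.h>

/* Mirrors the universal binary layout from <mach-o/fat.h>: a tiny header
 * followed by one fat_arch record per architecture, each pointing at a
 * whole copy of the executable for that architecture. */

struct fat_header {
    uint32_t magic;       /* FAT_MAGIC (0xcafebabe), stored big-endian */
    uint32_t nfat_arch;   /* how many fat_arch records follow */
};

struct fat_arch {
    uint32_t cputype;     /* e.g. PowerPC or i386 */
    uint32_t cpusubtype;
    uint32_t offset;      /* where this architecture's slice starts */
    uint32_t size;        /* size of this architecture's slice */
    uint32_t align;
};

int main(void)
{
    /* Hypothetical app: a 10 MB PowerPC slice plus an 8 MB x86 slice. */
    struct fat_arch slices[2] = {
        { /* PowerPC */ 18, 0, 4096,                     10u * 1024 * 1024, 12 },
        { /* i386    */  7, 0, 4096 + 10u * 1024 * 1024,  8u * 1024 * 1024, 12 },
    };

    uint32_t total = 0;
    for (int i = 0; i < 2; i++)
        total += slices[i].size;

    printf("universal binary payload: about %u MB (each slice is shipped whole)\n",
           (unsigned)(total / (1024 * 1024)));
    return 0;
}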
OpenStep: bzzzzt! Sun paid NeXT to port the GUI portion(+ dev libs) to solaris as a potential counter to M$’ Cairo. Sun paid more attention to Java as it became apparent that MS was, as usual, too busy “innovating” to release anything innovative. NeXT was also doing ports to x86(CYA), and other architectures as well.
I really wonder how much they optimized OpenStep code for the ppc as I would guess that in the porting they focused entirely on x86 originally after Sun lost interest… Also gcc gets WAY more optimization work for x86 than it does for ANY other processor. I’d suspect that it’s a combination of factors which come down to the ubiquity of the x86 kludgitecture sucking down all the optimization research/time/$s.
Emulation: Given the significant reduction in GPRs on x86, I'd suspect that Rosetta is merely a more advanced binary recompiler. A Q&D test would be to benchmark the first few runs, then re-benchmark the app again after using it for a length of time.
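For that quick-and-dirty test, something along these lines would do, with the caveat that timing repeated work inside one process only catches caching within that run; checking whether translations survive across launches would mean timing separate launches of the app instead. The workload below is just a stand-in.

#include <stdio.h>
#include <sys/time.h>

/* Rough timing harness: time the same chunk of work on its first few runs
 * and again on later runs. Under a caching binary translator the early
 * runs should be noticeably slower than the later ones. */

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

static volatile double sink;

static void workload(void)
{
    double s = 0.0;
    for (long i = 0; i < 5000000; i++)
        s += i * 0.25;
    sink = s;              /* keep the compiler from optimising it away */
}

int main(void)
{
    for (int run = 1; run <= 10; run++) {
        double t0 = now_seconds();
        workload();
        printf("run %2d: %.3f s\n", run, now_seconds() - t0);
    }
    return 0;
}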
Lastly: I still think that AMD would have been a better choice. They have slightly less kludgy designs than Intel and are managing to continue kicking Intel’s a– performance-wise. The only drawback is that their support chipsets suck, but Apple SHOULD have been able to handle that on their own unless a lot of hw engineers are going to be receiving nice pink slips soon…
It is no surprise that a cheap boutique version of IBM’s mainframe chip never was a performance champion.
From the inefficient bus to poorly designed I/O chip to the processor itself, there wasn’t much going for the G5 except 64-bits and backward compatibility.
Perhaps the chip would have been better (this is what IBM says) if Apple had not forced Altivec onto the chip. The native POWER4/4+ floating point was pretty good… and not nearly as specialized and hard to use as Altivec.
I wonder how much of a sense of betrayal Apple cultists will feel when the news that the PowerPC was a dog is more commonplace. There will be some emotional issues, no doubt. Maybe Apple can spin the whole thing and blame it on IBM and give some incentives to people to upgrade to the new and *much faster* Intel systems.
Hmmmm. I hear “employee pricing” is selling a lot of cars….
I think the PPC and most Mac users had an elitist attitude… just wait and see the benchmark numbers on the Intel platform that are going to blow away anything that OS X was producing on the PPC platform. I hope MS ports DirectX to the Mac so we can have some great gameage as well, besides all the choice software that is already available for the Mac. Then it would truly be wholesome software… a bit of everything!
I’d much rather see more use of OpenGL in games. It's fast, portable (any OS can use OpenGL), and you wouldn't have to use some botched port of a Microsoft “standard” that could be pulled at any time. We should be seeing more use of OpenAL (which Creative, a company I rarely praise, is pushing) soon, as MS is pulling DirectSound from the next version of DirectX (released with Longhorn).
Don’t be fooled by Microsoft porting Word to Intel Macs; they'll be watching Apple more closely now. Apple may be no threat now, but XP is getting very old and Longhorn beta 1 has been pushed back until the end of this month. MS is taking a huge gamble with Longhorn. It's much safer to have constantly updating releases that slowly improve than huge leaps years apart that open whole new exploits and risks of massive failure.
UNPOSSIBLE!
Here are a couple of points for everyone saying “this just shows PPC sucks, Jobs is a con, blah blah”:
1) Altivec, according to some developers who were speaking just after the change was announced, is a better vector unit than MMX,SSE,SSE2. This is only in very minor ways however, so it won’t significantly alter performance.
2) PPC chips were, in general, a small bit faster than Intel chips at the same clockspeed. However the current G5 runs at 2.5GHz whereas the current Intel (at least in the dev boxes) runs at 3.6GHz. Even with the troubles surrounding the Netburst architecture, that’s a lot of Hertz to catch up with.
So, frankly, it’s not surprising that a 3.6GHz P4 with a 2048KB cache is a bit faster than a 2.5GHz G5 with a 512KB cache.
In any event, CPU performance isn’t the real bottle-neck today, particularly for the Mac. It’s memory capacity and speed. Further, the most noticeable speed increase a user can achieve comes from replacing the graphics card rather than the CPU. This is particularly the case since Apple released the CoreGraphics API that lets developers harness the power of the GPU for their graphics work.
Altivec, according to some developers who were speaking just after the change was announced, is a better vector unit than MMX,SSE,SSE2. This is only in very minor ways however, so it won’t significantly alter performance.
Define “minor”. Altivec can theoretically sustain twice the single-precision throughput of any existing SSE unit.
PPC chips were, in general, a small bit faster than Intel chips at the same clockspeed. However the current G5 runs at 2.5GHz whereas the current Intel (at least in the dev boxes) runs at 3.6GHz.
This is, surprisingly enough, somewhat true. Extrapolating from the SPEC scores, the G5 at 2.7GHz is comparable to a Pentium 4 at 3GHz. The G5 has pretty atrocious integer performance, apparently. Meanwhile, its floating-point performance is comparable to a 3.5GHz P4, which puts it closer to an Opteron of the same clockspeed than a P4 of the same clockspeed.
1) Altivec, according to some developers who were speaking just after the change was announced, is a better vector unit than MMX,SSE,SSE2. This is only in very minor ways however, so it won’t significantly alter performance.
this isn't entirely true. intel devs have struggled to get some of the things apple has done in altivec running at even 50% of the speed of the equivalent code apple has.
The comment “Mac OS X for Intel takes ‘as little as 10 seconds’ to boot to the Desktop from when the Apple logo first displays on screen” seems to imply that booting OS X on a G5 takes some time. On my system (Dual 2.3 G5, 6GB RAM) OS X (10.4) boots in 8 seconds from Apple logo to Desktop. Although I would never consider boot time a reasonable benchmark.
boot time is a horrible benchmark. it's entirely RAM-speed and disk-speed based.
i’m sure my dual 2.5’s 10k rpm hd lets me boot faster than most pcs.
I’m sorry but this is ridiculous.
If someone tells me that Mac OS X runs faster on an Intel Pentium 4 3.6GHz than on a otherwise comparable DUAL 2.xGHz G5, I might believe that. I think it is highly unrealistic, but if several people who were able to compare these systems tell me it’s true, then maybe I’ll believe it.
But the developer machines are not “otherwise comparable”, they use Intel onboard graphics. Mac OS X makes heavy use of the GPU and I certainly won’t believe that a 3.6GHZ Pentium with onboard graphics performs better than a dual 2.xGHZ G5 with an ATI 9600 or an even faster graphics card.
it’s no mystery that a 6800 in your powermac will whip an onboard graphics card at GRAPHICS. the topic, however, is entirely about the processor.
No, it’s not only about the processor, it’s about overall speed and overall speed has a lot to do with the graphics card on Mac OS X.
Wait until these OSX on Intel machines get into the hands of an AnandTech or the like and are properly benchmarked. Then it will be easy to tell whether or not the Apple marketing hype around the performance of the PPC was just that. I know where I’m putting my money…
“Wait until these OSX on Intel machines get into the hands of an AnandTech or the like and are properly benchmarked. Then it will be easy to tell whether or not the Apple marketing hype around the performance of the PPC was just that. I know where I’m putting my money…”
As a condition of the lease, Apple won’t let anyone release benchmarks for the developer systems. So we will only see proper benchmarks when Apple starts selling the Intel Macs.
Shoot, with that kind of spec, it better be fast.
Wow, how many websites and commentaries claimed that dual G5’s were faster than any x86 machine? What about those tests that use very obscure Photoshop filters? I guess they were all untrue? Perhaps OSX magically made the x86 faster? Or perhaps the Mac fans will proclaim anything they use at the moment to be the best.
Can the Mac folks finally give SPEC some credit? Seeing as how SPEC confirms pretty much what everyone is observing about the G5 versus the Opteron and P4?
I wonder if the boot time may have something to do with possibly dumping openfirmware? Anyone know if the x86-Mac dev systems still have openfirmware? Or are they using the NIH-driven Intel version of openfirmware? (I’ve forgotten what they call it, but from descriptions it sounds like YAC of openfirmware as a BIOS replacement…)
Fuck the proprietary Intel Pentio system.
Guys, please think about this reasonably. Web browsing is mostly integer ops, so a 3.6GHz P4 compared to a 2.7GHz G5 (which doesn't have the best integer performance) will be faster, or “teh snappy”.
I’m curious to see how anything floating-point or vector based will behave, but otherwise I would expect general OSX application usage to be faster.
My overall guess is that most OSX apps will run faster on a P4, whereas apps that use optimized vector/floating-point ops will be a bit slower. There will surely be a huge difference in threaded apps as well, since these are single-processor P4 systems.
this is just a matter of optimization.
Especially Adobe has to learn to make its apps faster on the Intel platform. Photoshop CS was incredibly slow on Intel.
Intel and AMD aren't really slower, and G5s aren't really faster; they just handle some things differently.
Think different
…and if the CPU is too slow for some floating-point operations, just use the GPU.
Define “minor”. Altivec can theoretically sustain twice the single-precision throughput of any existing SSE unit.
Keywords: theoretically and single-precision. Theoretically matters much less than average. The Itanium is theoretically super-fast. But I’ve used the lame dog, and it was slowwwww.
Single precision is certainly useful for graphics, aka photoshop, but double precision is necessary for many things, including scientific apps.
What really matters is how hard optimization is, and what performance is like without it. Because the best vector unit ever is useless unless someone writes code that takes advantage of it.
1) “Theoretically” matters for a lot of apps. The Itanium2 is theoretically an FP monster, and lo and behold, it sits really high on the specfp rankings.
2) Single-precision is pretty much the only thing the Mac target market is interested in. Scientific computing is a small niche in the overall Mac market, while media processing is a much bigger part. Nearly all media processing apps can get by with single-precision FP.
3) Altivec aside, even with regards to double-precision, the G5 is fast. It has two symmetric FPUs, each of which can do a double-precision FMAC per cycle. SSE can’t do FMAC, so when FMACs are used (and they’re used often — why do you think the VaTech cluster scores so high?), the G5 still has twice the floating point throughput of any SSE-enabled processor.
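As a rough back-of-the-envelope check on that (peak figures only, and assuming the commonly quoted rates of one FMAC per G5 FPU per cycle and two double-precision flops per cycle for a P4 running SSE2):

G5 peak, double precision: $2~\text{FPUs} \times 2~\tfrac{\text{flops}}{\text{FMAC}} \times 2.5~\text{GHz} = 10~\text{GFLOPS}$
P4 + SSE2 peak, double precision: $2~\tfrac{\text{flops}}{\text{cycle}} \times 3.6~\text{GHz} = 7.2~\text{GFLOPS}$

Per clock that is 4 flops versus 2, which is where the “twice the throughput” figure comes from; at the actual shipping clocks (2.5GHz vs 3.6GHz) the peak gap narrows to roughly 1.4x, and sustained rates depend heavily on how well the code keeps the FMAC units fed.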
Probably Apple has access to Intel’s C compiler technology and is shipping it with their Mactel development kit, no? If so, are they using 64-bit mode, or will OS X on Intel run in 32-bit mode?
By 2007, you would think the OS would have to be 64-bit compiled…hopefully with backwards 32-bit compatibility ala current apps in OSX and XP x64 (but sadly enough, not 64-bit Ubuntu w/o chroot). Have to admit, despite only booting to XP twice, and OSX a half a dozen times this year, I admire what both companies have done mixing the 32- and 64-bit worlds.
I wonder if these P4 systems have hyperthreading enabled. I know that when I switched from an Athlon XP 2700+ to a 2.4GHz P4, Windows actually became snappier. Slower processor, but it felt faster.
OSX is a prime candidate for hyperthreading. The architecture just screams parallel processing; SMP or hyperthreading, they'll both add to the snappiness of OSX.
It's exciting to bring in dual-core possibilities as well. OSX does a fantastic job of SMP marshaling, much better than Windows XP. Even the dumb apps that I wrote, as long as they were threaded, ran much faster on a dual G4 than on a single.
OSX made it easier for people who know less to do more, including eking out performance.
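For what it's worth, the “dumb but threaded” case really is this simple. Here is a minimal sketch (plain pthreads, invented workload, compile with -lpthread) of the kind of app that picks up speed when the OS can put the two threads on separate CPUs or hyperthreaded contexts:

#include <pthread.h>
#include <stdio.h>

/* Split an embarrassingly parallel loop across two threads. On a dual G4/G5
 * (or a hyperthreaded P4) the OS can run the halves at the same time; on a
 * single CPU they just take turns. */

#define N 40000000

static double partial[2];

static void *sum_half(void *arg)
{
    long which = (long)arg;               /* 0 = first half, 1 = second half */
    long start = which * (N / 2);
    long end   = start + (N / 2);
    double s = 0.0;

    for (long i = start; i < end; i++)
        s += (double)i * 0.5;             /* stand-in for real per-item work */

    partial[which] = s;
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, sum_half, (void *)0L);
    pthread_create(&t1, NULL, sum_half, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("total = %f\n", partial[0] + partial[1]);
    return 0;
}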
I’m a little concerned about the 64-bitness of the P4. I believe Intel has an x64 chip out; I wonder if these P4s in question are those. I would truly hope that Apple wouldn't take a step back to 32-bit after they promoted 64-bit and brought me along with them.
I was weeks away from buying a G5, and then heard this news. Now I'm typing on a Windows box for another year or so, until these Intel machines are ready.
Mac Maniacs always say PPC is superior to x86.
Now what?
Shame on you.
no, they didn’t lie. PPC is better than x86; it just so happens that the people/companies doing PPC can’t provide the processors needed by apple for the *mac and *book lines.
RISC is better than CISC… though I do understand that the differences have become less over time; PPC (RISC) is better than x86 (CISC) and always will be. Apple never lied, it just had to make a change to provide the best computers available.
No, boot time is a very good benchmark.
And boot time depends not on HDD and memory alone, but also on the OS kernel and the CPU used. I have already proved that on my small laptop. Motherboards also have some effect on the pre-OS load time (the BIOS phase).
Everybody will love a quicker-booting system; ask Windows 2000 users who upgraded to Windows XP.