AnandTech has written a long and in-depth review of the new Intel iMac (16 pages). They conclude: “I like the iMac, I like it a lot. It’s a computer that can look and work as well in a kitchen as it can in an office, and that’s one thing that Apple has done very right with this platform. It took me this long to look at it, but I think it could quite possibly be Apple’s strongest offering as it accomplishes exactly what they are trying to do – which is build lifestyle computers.”
The link goes to the final page of the review. Slightly confusing.
This is the first one:
http://www.anandtech.com/mac/showdoc.aspx?i=2685&p=1
He says he wouldn’t be able to use it simply because its monitor resolution isn’t high enough. I know 1680×1050 isn’t the highest resolution in the world, but come on, what can’t you do on it?
Actually, he said that the 1440×900 17″ model was too low a resolution for his use, which would force him to upgrade to the 20″ model instead. Personally, I think 1440×900 would be enough for most people, but I’m sure he does a lot of multitasking.
I found this article a very good read, very informative. The tests are maybe not proper benchmarks, but they do indicate present PPC vs. x86 capabilities within Apple’s own product range. I found the RAM usage by Word 2004 astonishing; I don’t think I could have imagined that sort of usage. Okay, Rosetta is emulating a different architecture, but it was still a real eye-opener.
The article said Intel iMacs used slightly more memory. I wonder why this is; it seems to me that x86 instructions should be fewer… And I thought all the primitive units were the same size on both (x86 and PPC). I wish the article had provided some conjecture on why the Intel Macs use more RAM.
Tip: Rosetta
So Apple is building their programs with Rosetta? Not just native code?
martin.k’s post is pure nonsense. Most likely the x86 instruction set features longer, more complex instruction encodings than the PowerPC’s, or else the compiler isn’t doing as good a job for x86. Given its prevalence, though, it’s more likely the instruction set.
Rosetta is an emulator; you can’t “build” any apps with it, but you can run apps built for PowerPC under Rosetta on Intel.
“martin.k’s post is pure nonsense. Most likely the x86 instruction set features longer, more complex instruction encodings than the PowerPC’s, or else the compiler isn’t doing as good a job for x86. Given its prevalence, though, it’s more likely the instruction set.”
Actually, the x86 ISA is quite a bit more compact than that of any RISC. Since it has variable length instructions, the average instruction size is around 3.2 bytes, versus 4 bytes for a RISC. Also, since instructions can take memory parameters, the number of explicit load/stores are significantly reduced. It is common for GCC-compiled x86 code to be almost half the size of GCC-compiled PowerPC code.
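To make that concrete, here’s a trivial C function with the rough instruction sequences a compiler might emit on each side. These are hand-written approximations for illustration, not actual gcc output:

    /* density.c -- illustrates why x86 code tends to be denser than PPC.
     * For a simple read-modify-write, x86 can operate on memory directly,
     * while a load/store RISC like the PowerPC needs an explicit
     * load / add / store, each instruction a fixed 4 bytes.
     *
     * x86 (approximate, ~7 bytes total):
     *     add dword ptr [counter], 5   ; one instruction, memory operand
     *
     * PowerPC (approximate, 12 bytes, ignoring address setup):
     *     lwz  r3, counter             ; load  (4 bytes)
     *     addi r3, r3, 5               ; add   (4 bytes)
     *     stw  r3, counter             ; store (4 bytes)
     */
    int counter;

    void bump(void)
    {
        counter += 5;
    }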
x86 code IS smaller than RISC code (at the expense of more complicated hardware), but the code size for a given program is usually very small compared to the amount of memory a program will use.
Most likely the extra memory is caused by the way GCC is compiling it, added to the fact that the code may not be as optimized for x86 as it is for PPC. GCC is probably more likely to use memory on a Core Duo than on a G5, given the memory access times it has. Then again, I may be giving GCC too much credit. Perhaps it is solely due to quick ports of PPC code.
Apple provided the emulator Rosetta as part of their promise to help migrate customers to x86. Apple has already stated they will release updates to their current software offerings as “universal” binaries, which to me would mean that when buying software such as FCP you’ll find an installer for PPC and also one for x86. If, though, Apple were to release an “all in one” binary that runs in emulation, that would not be beneficial to consumers, as it’s always better to run an application as native code. Eventually, though, Apple customers should expect Apple to stop supporting PPC in either Q4 2007 or Q1 2008.
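For what it’s worth, you can check what a given binary contains yourself: “lipo -info <binary>” lists the architectures, and the file’s first four bytes give it away too. Here’s a minimal C sketch; the magic constants are copied from Apple’s <mach-o/fat.h> and <mach-o/loader.h>, and the TextEdit path in the comment is only an example:

    /* fatcheck.c -- peek at a binary's magic number to see whether it is
     * a universal ("fat") binary or a single-architecture Mach-O file.
     * Example (hypothetical path):
     *   ./fatcheck /Applications/TextEdit.app/Contents/MacOS/TextEdit
     */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        unsigned char m[4];
        FILE *f;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary>\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f || fread(m, 1, 4, f) != 4) {
            perror(argv[1]);
            return 1;
        }
        fclose(f);

        if (m[0] == 0xca && m[1] == 0xfe && m[2] == 0xba && m[3] == 0xbe)
            puts("universal (fat) binary: more than one architecture inside");
        else if (m[0] == 0xfe && m[1] == 0xed && m[2] == 0xfa && m[3] == 0xce)
            puts("single-arch Mach-O, big-endian (PPC)");
        else if (m[0] == 0xce && m[1] == 0xfa && m[2] == 0xed && m[3] == 0xfe)
            puts("single-arch Mach-O, little-endian (x86)");
        else
            puts("not a Mach-O binary");
        return 0;
    }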
Everything included in OS X Tiger for Intel is native code, without Rosetta. Maybe the code is not yet very optimized; that would explain why it takes more space.
Rosetta is only required if you run PPC software.
iLife ’06 and iWork ’06 are Intel native.
“The article said Intel iMacs used slightly more memory. I wonder why this is; it seems to me that x86 instructions should be fewer…”
Yes, x86 code is more compact than RISC code, so it can’t be that. Since the applications are compiled from the same source and are all 32-bit their data memory requirements should be about equal. x86 passes function arguments on the stack rather than in registers, so it probably needs a few hundred bytes more there, but nowhere near a megabyte.
So why does the x86 version require more memory, then? My guess would be that it’s the dynamic memory allocator in the C/C++ library. Perhaps on x86 it just keeps a bigger reserve of allocated but not really used pages.
That would require a few more pages to be paged out the first time you fill up your real memory, but wouldn’t really affect performance afterwards.
It would be interesting to see the memory requirements with a few apps loaded up so that all the memory is being used.
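That allocator-reserve theory is easy to poke at, by the way. Here’s a rough sketch using the POSIX getrusage() call; note that ru_maxrss is a high-water mark (kilobytes on most systems, bytes on Mac OS X), and some allocators hand large blocks straight back to the OS via mmap, so results will vary:

    /* reserve.c -- a rough probe of the allocator-reserve theory:
     * allocate and touch 64 MB, free it, allocate again, and see
     * whether the high-water mark grows a second time. If the
     * allocator keeps freed pages in reserve, it shouldn't.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    #define BIG (64 * 1024 * 1024)

    static long max_rss(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_maxrss;   /* KB on most Unixes, bytes on OS X */
    }

    int main(void)
    {
        printf("max RSS at start:        %ld\n", max_rss());

        char *p = malloc(BIG);
        if (!p) return 1;
        memset(p, 1, BIG);     /* touch every page so it's really mapped */
        printf("max RSS after touching:  %ld\n", max_rss());

        free(p);
        p = malloc(BIG);       /* should reuse the reserved pages */
        if (!p) return 1;
        memset(p, 1, BIG);
        printf("max RSS after round two: %ld\n", max_rss());

        free(p);
        return 0;
    }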
The difference in memory could also be that Apple’s simply using different compiler flags on x86 than on PPC (like, -O3 vs -Os).
That’s an interesting result, in which the Core Duo came out behind the G5. In Anand’s previous results, the Opteron came out substantially ahead of the G5, a fact that is mirrored by my own tests with TSCP, which show the G5 being about 75% as fast. Moreover, the Core Duo is supposed to have an incredible branch predictor (it’s one of the key things that makes it faster than the PIII). It would be interesting to see precisely what is holding the Core Duo back in this case by taking a look at the performance counters on the chip. Does anyone know if Shark runs on Intel Macs?
Otherwise, the benchmarks look quite solid, although it’s kind of entertaining when Anand calls a 30% performance improvement only “decent”, given that it’s just about the delta between the slowest and fastest iMac G5 available, and about the same as the overall improvement in the dual-G5 PowerMacs since 2003. It’s really quite amazing that Intel managed to squeeze a dual-core ~2GHz chip with 2MB of cache into a 35W power envelope and a 90mm^2 die.
I usually find articles at AnandTech to be generally very good, and this one doesn’t change that. It’s definitely the best article about the iMac G5 vs. the iMac Core Duo. I found it pretty interesting to see that, at least in QuickTime, the G5 did do better than the Core Duo when the latter was only using one core, like the G5 has. The article seemed fair, and I didn’t see any of the biases that have seemed apparent in some other articles about the G5 vs. the Core Duo. Great article, AnandTech.
Hmm… title should read Article, not Artcile
On the page titled “Intel Macs use More Memory”, why does the Intel iMac register as having 509 MB when it has a 512 MB chip installed?
I think that might be because of the ROM that’s loaded into RAM. I know all the “New World” Macs do this; I’m assuming the Intel ones do too.
That’s not the case, because #1, I thought OS X didn’t really need the ROM, and #2, the G5 registers 512 MB. Is it being used by EFI?
Connected Home Magazine seems to like it very much as well:
http://www.connectedhomemag.com/HomeOffice/Articles/Index.cfm?Artic…
But I’ll take something with much more punch and upgradability, please.
The G5 chip is still an outstanding performer, and the Quad is a monster.
http://www.geekpatrol.ca/article/101/geekbench-comparison
Wow! It looks like GeekBench will be to the G5 what ByteMark was to the G4. A highly simplistic and artificial benchmark that PowerPC folks point to when everything else shows x86 winning…
It’s not surprising that the G5 is faster on the single-threaded floating-point tests; the G5 is really fast with those kinds of operations, and beating a Core Duo on a single-threaded test is to be expected. I would expect the future Woodcrest to be a better competitor to the G5 in this area.
The G5 can beat high-end x86 processors such as the latest Opteron in many floating-point tests, so it’s still a strong processor in this area.
Otherwise, I don’t find the AnandTech test very interesting, because we already saw other people testing the Core Duo iMac on iLife, iTunes, and QuickTime. And they seem so amazed by rather obvious things, like Rosetta using more memory, even though they know it caches translated code in memory. The memory penalty buys faster translated applications; an inevitable tradeoff.
I mean, the article does not bring anything new; we already know the results of those tests, and AnandTech does not do it better than others. I would agree that the Geek Patrol test brings more exoticism and diversity to the testing methods used on the G5 and Core Duo. It’s more interesting to look at.
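For anyone wondering why “caching translated code in memory” necessarily costs RAM, here’s a minimal sketch of the general shape of a translation cache. This is just an illustration of the usual technique, not Apple’s actual implementation:

    /* tcache.c -- why a binary translator trades memory for speed:
     * each guest (PPC) code block is translated once, and the result
     * stays resident so later executions take the fast path.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_SLOTS 4096

    typedef struct block {
        unsigned long  guest_pc;   /* address of the original PPC block  */
        void          *host_code;  /* buffer holding the translated code */
        struct block  *next;
    } block;

    static block *cache[CACHE_SLOTS];

    /* stand-in for the expensive translation pass */
    static void *translate(unsigned long guest_pc)
    {
        printf("translating block at 0x%lx (slow path)\n", guest_pc);
        return malloc(64);   /* translated code now lives in memory */
    }

    static void *lookup(unsigned long guest_pc)
    {
        block **slot = &cache[guest_pc % CACHE_SLOTS];
        block *b;

        for (b = *slot; b; b = b->next)
            if (b->guest_pc == guest_pc)
                return b->host_code;    /* fast path: already translated */

        b = malloc(sizeof *b);
        b->guest_pc  = guest_pc;
        b->host_code = translate(guest_pc);  /* pay once, in time and RAM */
        b->next      = *slot;
        *slot        = b;
        return b->host_code;
    }

    int main(void)
    {
        lookup(0x1000);   /* slow: translated and cached */
        lookup(0x1000);   /* fast: served from the cache */
        return 0;
    }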
Shark is a universal binary, so it runs on Intel Macs; actually, all applications bundled with the Developer Tools are already universal.
I thought Anand was a bit too negative about Rosetta.
It gets about a third of the performance even on media/3D stuff, which was always going to be the most difficult case. The Word document opening test gets about 60% of the G5’s performance, and I’d expect other integer tests to get similar results; shame Anand didn’t do more there.
These numbers are very impressive for processor emulation, so Rosetta might just be a crutch, but it’s a gold-plated one.
The crash on the HTML test is worrying, though. Perhaps it’s just exposing a subtle bug in MS’s code that might have shown up on a new PPC implementation too, but of course that doesn’t make a difference to a user.
“On the page titled ‘Intel Macs use More Memory’, why does the Intel iMac register as having 509 MB when it has a 512 MB chip installed?”
I don’t know, but here’s a theory. The missing 3 MB could be part of Apple’s solution for tying OS X to their own hardware.
This could work as follows. Part of the OS code on the hard disk is encrypted and the TPM module contains the key to decrypt it. During boot-up, the TPM writes the decrypted code into the reserved 3 MB. The chipset ensures that those 3 MB can only be read as instruction memory, not data memory, so that it can only be executed, not copied, even by kernel code.
Properly implemented, such a scheme could be safe from software hacking (short of breaking the encryption algorithm). So hackers would have to attack the hardware, e.g. by listening on the bus while the decrypted code is being written, which would make things a lot more difficult.
Ummm, no, put away the crazy theories. Welcome to x86land. A typical limitation of x86 hardware is that you won’t have use of all your RAM, no matter what OS you use. Here are two of my machines under NetBSD, one with 512 MB of RAM and one with a gig:
total memory = 511 MB
avail memory = 492 MB
total memory = 1022 MB
avail memory = 993 MB
That’s usually because of onboard video. I have many x86 systems that see the full amount of RAM.
“That’s usually because of onboard video. I have many x86 systems that see the full amount of RAM.”
No, you don’t. Aside from onboard video, which neither of the machines I listed has, this is a limitation of the memory addressing done by the x86 BIOS, and there isn’t any way around it.
What are you talking about? The only memory addressing issue is the fact that the BIOS is mapped from 640 KB to 1 MB, so that ~384 KB is unusable. Everything else is detected by the OS itself and is usable on any x86 machine.
Yeah, that’s for sure when you have a BIOS, but the Intel Mac has EFI…
“The difference in memory could also be that Apple’s simply using different compiler flags on x86 than on PPC (like, -O3 vs -Os).”
Even then the x86 code would probably still be smaller than the PPC code. And executable code makes up only one (often small) part of applications’ memory requirements anyway.
No. The actual code (executable) would be smaller. But the amount of memory used by a program would almost certainly be less when compiled with -Os. If it weren’t, then that flag wouldn’t be working correctly. The instructions for a program make up a VERY small part of the total amount of memory used.
“But the amount of memory used by a program would almost certainly be less when compiled with -Os. If it weren’t, then that flag wouldn’t be working correctly.”
Where did you get that idea from? -Os regards code size only. From the gcc manual:
“-Os
Optimize for size. -Os enables all -O2 optimizations that do not typically increase code size. It also performs further optimizations designed to reduce code size.”
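For what it’s worth, the effect of the two flags is easy to see on a toy function. A minimal sketch; the compile commands in the comment are ordinary gcc invocations, and “size” is the standard tool for reporting segment sizes:

    /* sizedemo.c -- compare -Os vs -O3 output size on a toy function.
     * Compile both ways and compare the text segment:
     *   gcc -Os -c sizedemo.c -o small.o && size small.o
     *   gcc -O3 -c sizedemo.c -o fast.o  && size fast.o
     * -O3 will typically unroll and vectorize the loop, growing the
     * text segment; -Os keeps it compact. Neither flag changes how
     * much heap the program allocates at run time, which is the point
     * being made above.
     */
    unsigned sum(const unsigned *v, unsigned n)
    {
        unsigned i, s = 0;

        for (i = 0; i < n; i++)
            s += v[i];
        return s;
    }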
“Welcome to x86land. A typical limitation of x86 hardware is that you won’t have use of all your RAM, no matter what OS you use.”
That’s not much of an explanation. Where is the memory going, then?
“I think that might be because of the ROM that’s loaded into RAM. I know all the ‘New World’ Macs do this; I’m assuming the Intel ones do too.”
Yet the G5 reported 512 MB, while the Intel reported only 509 MB.
We’re Sorry – the Security Hole is Fixed Only in the Next Version.
http://www.securityfocus.com/advisories/6004
Wow, Apple must have really underestimated this one!