I recently bought one of the new dual-core PowerMacs. Having used the machine for a couple of weeks, I thought I would share some of my observations and feelings about it.

First, let me get my biases out in the open. I have, for about four years, very happily used Linux on my desktop. Doing so has made me very comfortable with the UNIX environment in general, and with GNOME specifically. During that time, I have used OS X machines on a regular basis, so I am quite comfortable in that environment as well. Since I switched to Linux, I have not used Windows for anything more than the occasional bit of software testing or lab work, and I generally feel quite uncomfortable with it. Thus, this article is very much written from the perspective of someone who finds OS X and Linux pleasing on principle. I implore the reader to make his own value judgments based on my comments.

The Machines

I have two computers at the moment: a dual-core Athlon X2, which I will refer to as the “X2” from here on, and the dual-core PowerMac G5, which I will refer to as the “G5”. The specifications of the two machines are as follows:

In short, the two machines are very comparable in specifications, with the main differences being that the G5 has a DVD burner, while the X2 has the much faster “GT” version of the GeForce 6600 graphics card. It would seem that the G5 also has a much beefier power supply, but in reality, power supply ratings are about as reliable an indicator of performance as megahertz ratings, so no comparison can be made between the two without further testing.

From the outside, the PowerMac G5 is a beautiful machine. It is entirely aluminum, brushed to a gorgeous silver-white color. The perforated front contrasts nicely with the solid sides, and the white pulsing power light is a sophisticated accent. The machine is as solid, and heavy, as a rock. The side panels and handles are made of what appear to be 1/8″ aluminum plate.
Anybody who knows aluminum knows that 1/8″ plate is serious stuff. It can easily support the weight of a man, and getting it to curve the way the G5’s handles are curved requires a two-ton press. All in all, the G5 looks more like an expensive piece of audio equipment than a computer.

Unfortunately, the G5 doesn’t just like to be seen, but to be heard as well. The G5 is very loud compared to the X2. I somewhat expected the G5 to be louder, since I have optimized the X2 very intensively for silence, but it was louder than I expected. From an acoustic point of view, the G5’s noise issues are the result of a few critical failures in case design.

The primary problem is the cooling system. It is very elaborate, with nine fans, a huge dense heatsink, and numerous air guides. It is entirely too much for a processor that supposedly has a thermal design power of 100W, and its overkill factor makes for a lot of unwanted noise. In comparison, the X2’s cooling system is built on a few simple principles of silent computer design. First, there are only three fans, and they are large, slow ones that move a lot of air at low speed. Noise is roughly a 6th-order function of air velocity, so making a fan twice as big but half as fast can result in a huge decrease in noise. Second, the X2 uses a large but free-flowing heatsink. Turbulent airflow through a heatsink creates a lot of noise, and a constricted heatsink like the G5’s is noisier than a freer-flowing one like the X2’s.

A secondary problem is the case. The G5’s fancy aluminum case is an acoustic disaster. Plate aluminum is a very rigid, and thus very “loud,” material. To compound the problem, there are few noise treatments on the inside of the case. Parts are mounted to the aluminum frame without isolation devices, ensuring that every slight vibration will be magnified into an audible annoyance. With nine fans and a noisy hard drive, there is plenty of vibration for the aluminum case to amplify.
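As a back-of-the-envelope illustration of the fan-speed argument above (using the article’s 6th-power figure; real fan-noise exponents vary by fan and installation), here is a quick sketch of how much halving air velocity buys you:

```python
import math

def noise_change_db(velocity_ratio, exponent=6):
    """Relative noise change in dB for a given air-velocity ratio,
    assuming noise scales with velocity^exponent (the article's figure)."""
    return 10 * math.log10(velocity_ratio ** exponent)

# Halving fan speed (while keeping airflow up via a larger blade),
# under the v^6 assumption:
print(round(noise_change_db(0.5), 1))  # about -18 dB
```

Under that assumption, a fan spinning at half the speed radiates on the order of 18 dB less noise, which is why big, slow fans dominate silent-PC builds.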
The X2’s P180 case, in comparison, is designed for silence. It uses a composite construction of plastic sheets sandwiched between aluminum ones. These panels dampen interior case noises instead of amplifying them. Inside the case, everything that can vibrate is mounted using soft silicone pads or grommets. These ensure that vibrating components are isolated from rigid structures, suppressing vibration noises.

A tertiary problem is the hard drive. It is a cheap Western Digital unit that is extremely loud. It can be heard while merely spinning, and when seeking it is distractingly audible. In comparison, the Seagate in the X2 is silent while spinning, and its seeks can barely be heard without listening carefully.

In all, the G5 is barely tolerable from a noise perspective. Many people would find it adequate, even quiet, but gamers with 100 CFM Tornado fans don’t count, and I point to the popularity of silent computing websites as evidence that more and more computer users are becoming noise-conscious. One doesn’t even have to build one’s own computer to find a quieter machine than the G5. There are three Dell Dimension Pentium 4 machines in our apartment, and every single one is quieter, by a large margin, than the PowerMac.

Moving on to the inside of the machine, I would like to point out a few things about parts quality. There is an idea floating around that Apple somehow uses particularly high-quality components. I find that idea to be inaccurate. It is true that Apple uses a very high-quality case, heatsink, and motherboard PCB, but that is about it. The hard drive in my PowerMac is, as I mentioned, a cheap Western Digital unit. If you type “250GB SATA” into Froogle, it is the unit that comes up at the very bottom of the price list. The fans are run-of-the-mill Delta pieces with audible bearing noise. The RAM is a mix of generic and Crucial. I’ll leave it to the reader to guess which brand came with the machine, and which brand I added.
The keyboard is light and mushy, and its keys do not spring back with conviction. The mouse is lightweight and has a very flimsy-feeling scroll nub.

The X2’s components are superior in almost every regard, save perhaps for the motherboard and heatsink. The case is superior in the areas that matter, notably expandability and noise. It uses name-brand low-latency memory, a quiet Seagate hard drive, and a very efficient and quiet power supply from a company well known, to silent computing folks, for the quality of its products. It uses fans that are $10 apiece, but worth every penny for their high-quality, silent bearings. It uses a keyboard heavy enough to wield in self-defense, with real German-engineered key switches that spring back as soon as pressure is released. It is, in short, a very high quality unit.

I have no problem with paying a premium for a premium product. I love my iPod, and think they are worth every penny, because their hardware really is superior to what is available in competing players. However, I cannot say the same for the PowerMac. Aside from the aforementioned case and cooling system, there is nothing in there that you wouldn’t find in a Dell.

Ultimately, it is not the case that you spend hours a day staring at, but rather what’s on the screen. In this regard, the PowerMac shows one of its best sides. From the stark grey boot logo to the network preferences panel, OS X presents a beautiful environment throughout. It cannot be stressed enough that the beauty of the environment is a function of many factors, of which the Aqua theme is actually only a small part. The most noticeable thing is the use of color. While OS X sticks to a palette of blues and grays, it occasionally uses splashes of reds, purples, and yellows to liven things up. Keeping an exciting interface like Aqua usable on a day-to-day basis is a challenge, but Apple accomplishes it by doing an excellent job of laying out interface elements.
Even with the big icons and colorful widgets, nothing ever feels cramped or cluttered. Nothing ever seems like it clashes or is otherwise out of place. The attention to detail in this regard is astounding. For example, almost nowhere in the interface is there a small icon. Small icons are generally quite ugly, since they are hard to make out as anything but formless blotches of color. The only place they appear is in some window title bars. Interestingly, Apple seems to have removed them in the newer Platinum theme.

The menus are just incredible. OS X has the best menus, bar none, of any OS I have ever used. Apple seems to realize how important menus are in traditional computer interfaces, in a way few others do. OS X’s menus are big, uncluttered, and above all, clear and easy to read. They use big fonts, big arrows, big symbols denoting shortcut commands, and ample spacing between all the elements. Of course, OS X has the top menu bar, which I love so dearly, and which, despite what some people think, works just fine on a large monitor. It also has the very useful “application name” menu, which contains the miscellaneous entries that always get scattered around randomly in other menu systems. Seriously, has the “Exit” item ever made sense in the “File” menu?

In all, OS X’s UI is a noticeable step up from GNOME’s, and a giant leap up from Windows’s. In comparison to GNOME, it is more colorful and artistic, but also a bit more busy. GNOME is more subdued and spartan, but lacks some of Aqua’s excitement. In this regard, personal preference plays a big role, and I can see many people preferring GNOME’s more laid-back style. Interestingly, I find GNOME quite a bit more rigorously consistent in look, but I find OS X’s slight variances in appearance to be more pleasing in day-to-day use.

Exposé is great. I haven’t missed virtual desktops at all in OS X. My benchmark for window management is a long Matlab session.
I’ve found that after a few hours of working in Matlab, I end up with a couple of dozen windows for simulation models, figures, graphs, m-files, etc. Windows absolutely chokes in this scenario, leaving me with a “Matlab” entry in the taskbar with a linear list of every open window. Of course, the list is full of completely non-descriptive names like “Figure 1”. Exposé brings order to this chaos, and even allows me to switch to a text editor window to type up some results without completely messing up my workflow. Exposé is a wonderful example of taking advantage of the natural strengths of human users, in this case image recognition, to solve a complicated interface problem.

Since I stare at text all day, I am quite picky about the quality of text in the interface. After living with it for a week, I have to say my feelings on OS X’s text are mixed. On one hand, I have to congratulate Apple for using nice, large fonts throughout the OS. Apple’s default UI font is Lucida Grande at 13 pt, equivalent to a Windows font size of about 10 points. It is a pleasure to look at compared to Microsoft’s microscopic 8-point Tahoma font. OS X’s renderer seems to use very light hinting, so the rendered glyphs retain most of the aesthetic qualities of the original font. In comparison, Windows’s ClearType renderer contorts glyphs until their original forms are unrecognizable. On the other hand, OS X’s fonts are quite fuzzy compared to ClearType’s. Having first used anti-aliased fonts in BeOS, I have no problem with fuzzy fonts. My main requirement is that the rendering be sufficiently sharp to keep me from getting eye-strain at the end of the day, and OS X’s renderer meets that criterion adequately. I must say, however, that FreeType with auto-hinting is still my favorite renderer. It is in between OS X’s renderer and ClearType in both shape accuracy and sharpness, and to my eyes it is a happy medium.
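The “13 pt is about 10 pt on Windows” equivalence follows from the different reference resolutions the two systems assume for text layout; a sketch, assuming the classic 72 dpi for the Mac and 96 dpi for Windows:

```python
def mac_pt_to_windows_pt(mac_pt, mac_dpi=72.0, windows_dpi=96.0):
    """Convert a nominal Mac point size to the Windows point size that
    produces the same pixel height, under the assumed reference DPIs."""
    return mac_pt * mac_dpi / windows_dpi

print(mac_pt_to_windows_pt(13))  # 9.75, i.e. roughly a 10 pt Windows font
```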
One aspect of interfaces that often gets neglected is the tactile interface to the OS. Luckily, that is something Apple does not neglect. OS X’s keyboarding interface is excellent. While OS X is often considered to be an OS for new users, it is obvious that its keyboard shortcut mechanism is designed with power users in mind. The shortcuts are consistent, easy to reach, and thorough. The mousing interface is comparably good. Menus are very easily navigable, with hysteresis delays tuned to maximize the feel of control. It is in areas like these where Apple’s long background of hard UI research really shows through.

I do have a couple of gripes about the UI. First, file typing is a mess. It seems to me that an OS as otherwise elegant as OS X should use a more elegant method of file typing than file extensions. The situation would be better if OS X at least handled file extensions properly, but it doesn’t. It seems to use a weird mix of extensions and creator codes that results in completely unpredictable behavior. For example, I originally wrote this article in TextEdit. When I opened the file in Safari, it came up as plain text, not as an HTML page. I tried copying the file to another name via the CLI, but that didn’t work. However, when I opened up VIM in the terminal and pasted in the text of the article from TextEdit, the resulting file opened up as HTML in Safari. At this point, I still have no idea what happened; I’ve just taken to editing HTML files with Emacs instead. I also find it difficult to determine when a file extension will automatically be provided and when I must type one. I really can’t stand that Finder, by default, shows some file extensions, like .pdf, and hides others, like .png. In comparison, GNOME handles file typing wonderfully.
I’ve never been surprised by what file type was assigned to a file, and the way the GTK+ file dialog is set up, it is absolutely clear what the actual file name will be when the document is saved. It’s also nice how, in GNOME, naming a file with a particular extension will cause the application to save the file in the format associated with that extension. I tried to do this in OS X’s screenshot utility, but instead of saving the file in the format I wanted, it simply yelled at me and said that the extension must be “.tif”.

My last gripe may seem minor, but it has been bothering me quite a bit. Safari’s “Show in Finder” icon in the Downloads window looks exactly like the Spotlight icon. It is completely non-obvious that the icon has anything to do with the Finder without reading the mouse-over text. Actually, the “magnifying glass” element in general is overused in OS X, since it’s part of the Sherlock and Preview icons as well.

Let me say this at the outset: software is the greatest strength of the Macintosh platform, and my primary motivation for buying the PowerMac. Between the high quality bundled applications and the wide array of available third party programs, the Mac has everything most desktop users will need. Since software is a rather personal matter, the best I can do here is to make some comments about the applications I use on a regular basis, and hope that the reader finds some overlap with his own usage patterns.

First, a meta-comment about how applications are handled on OS X. I’m on the fence about the DMG installation method. It is quite a bit more manual than APT on Ubuntu. I miss being able to update all my applications from a single interface. Further, since there is no way to install ancillary files along with a program, the documentation that accompanies the program in the DMG usually gets thrown away with the DMG. It is also irritating that there are several different ways of installing apps.
Most use the DMG method, but Apple’s own apps use a PKG installer. On top of that, Fink uses its own APT-like system. However, application installation on OS X is better than the cumbersome underlying mechanisms would suggest. The biggest advantage of OS X is that its developer base can, and does, target a single version of the OS. Furthermore, because of the expectations of users, developers take the time to package their applications very nicely. The difference between the two platforms is mainly noticeable with obscure software that doesn’t normally get packaged by major distributions.

Okay, let’s get the Finder out of the way. The OS X Finder seems to engender love-it or hate-it reactions, and, well, I’m in the latter camp. I don’t hate it, per se, but I don’t like it very much either. I have to admit that using GNOME for the last six months has really made me a spatial junkie, and I really do not like going back to the browser paradigm. Also, the OS X Finder really is as unpredictable as everyone says it is. There is a weird mix of spatial remnants in with the browser style. For example, if you click on “Macintosh HD” when you have an existing Finder window open, it will open a new Finder window unless the existing one has “Macintosh HD” open. However, it doesn’t carry this exclusivity through the interface, since when you open another Finder window, you can browse to “Macintosh HD” in both. I could gripe about the Finder some more, but a far better reference for the Finder’s issues would be John Siracusa’s articles about it at Ars Technica.

On a happier note, let’s consider Spotlight. Spotlight is an enormous boon for a user like me who hates organizing files manually. It’s extremely handy, only a shortcut key away, very fast, and has an excellent success rate in finding relevant files. The fact that it searches file contents as well as file attributes puts it a step above the search implementation in BeOS’s BFS.
Live search folders are as handy as they were in BeOS. I use them to organize my e-mail messages, which allows me to find e-mails from a particular professor, for example, just by opening a folder. I keep trying to find more uses for Spotlight as I go along, but I’m already very satisfied with its performance. Apple did an excellent job of integrating search into multiple places in the UI, such as the Finder, the file dialog, and the menu bar, making the feature much more useful than it would have been with just a single point of access.

I really like how Apple integrated meta-data annotations into the UI. The file information window simply has a plain text field named “Spotlight Comments”. This simplicity makes a lot of sense. Given the users who will rely on Spotlight, the ones who don’t organize their files into neat directories, files aren’t likely to get detailed typed annotations. A simple free-form text field, however, has a hope of getting used.
X2:
Processor: AMD Athlon X2 4400+ @ 2.2 GHz
Motherboard: DFI LanParty nForce4
Cache Memory: 1MB of full-speed on-die L2 per core
System Memory: 2GB of CAS2 DDR-400 SDRAM
Graphics Card: Gigabyte GeForce 6600GT 128MB
Hard Drive: Seagate 7200.8 250GB SATA
Optical Drive: MSI SATA CD-RW
Power Supply: Seasonic S12 380W
Case: Antec P180
Operating System: Ubuntu AMD64 5.10
G5:
Processor: IBM PowerPC 970MP @ 2.3 GHz
Motherboard: Apple Proprietary
Cache Memory: 1MB of full-speed on-die L2 per core
System Memory: 2.5GB of CAS4 DDR2-533 SDRAM
Graphics Card: Apple GeForce 6600 256MB
Hard Drive: Western Digital WD2500JS 250GB SATA
Optical Drive: Pioneer DVD+RW
Power Supply: Apple 600W
Case: Apple PowerMac G5 Case
Operating System: Mac OS X 10.4.3
Shared:
Monitor: Dell 2405FPW LCD @ 1920x1200
Keyboard: Cherry Cymotion Xpress
Mouse: Microsoft Intellimouse
Spotlight Results
Moving to the other much-ballyhooed OS X feature, Dashboard, I have to say that my reaction is: eh? I simply don’t find it particularly useful, at least in its present incarnation. I don’t need a computer to tell me that it’s cold outside on a November morning in Atlanta. I don’t need to look at the clock, because there is a nice digital watch right in the corner of the screen. I have never found a tedious on-screen calculator useful, preferring the real one on my real desk. I suppose Dashboard has potential, but as it is, it tells you either what you didn’t need to know, or what you already did.
Next, let’s consider Safari, OS X’s standard web browser. Safari is, in general, a decent browser. Its rendering quality is fairly good, though not as good as Firefox’s, and its integration into the overall environment is superb. Its interface is classically Apple, with a very minimal combined toolbar and address bar, and sparse yet eminently useful menus and context menus. It is configurable enough, and offers just enough features, like the integrated Google toolbar, to keep me from straying elsewhere. The RSS feeds feature, in particular, is very nice, and quite useful for those who like to keep up with Slashdot, OSNews, CNN, and other news sites. My biggest complaint with Safari is bugginess. Safari leaks like a sieve. I’ve been running it all day, and now, with no windows open, it’s using 230MB of RAM. It’s also unstable. In the past week, Safari has crashed on me at least three times. Further, there are occasional visual glitches, such as bits of horizontal scroll bars “sticking” to the content area during vertical scrolling. Finally, there is an infuriating bug which will sometimes cause the browser to go back several pages after hitting the back button just once. Whenever I try to get back to my GMail inbox after reading a message, I’m thrown back to whatever website I was browsing before. Normally, I’d be quite pissed off about a browser that has crashed three times in a week, especially since Firefox used to set a high bar for stability, but given the instability of recent Firefox versions, I’m willing to cut Apple some slack. I am, however, like most OS X users, waiting for 10.4.4.
Safari RSS
I was originally going to have separate sections for iTunes, Mail, and iChat, but then I realized: who cares? iTunes plays music, Mail shows you your mail, and iChat lets your friends send you funny links. They all do their job just fine, and behave just like every other jukebox/e-mail client/AIM client in existence.
The OS X terminal is quite a bit better in its Tiger incarnation than it was in early versions of the OS. It has proper clipboard integration and acts as a drag-and-drop target for files. It features command completion, a command history, and a decent amount of flexibility with regard to terminal emulation. Its interface leaves something to be desired, however. It doesn’t anti-alias fonts by default, and figuring out how to enable the feature is non-intuitive. Terminal’s preferences menu doesn’t actually contain any preferences; those are in a separate “Window Settings” menu. Terminal’s biggest weakness is that it is glacially slow. I thought GNOME’s terminal was glacially slow, but OS X’s redefines the term. Let’s just put it this way: one of my Lisp compilers has a warning in the documentation not to run the build from OS X’s terminal, because Terminal’s slow handling of the compilation messages actually slows down the build significantly!
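A crude way to see this effect for yourself (a hypothetical helper of my own, not anything from the compiler’s documentation): time how long it takes just to push build-style output through a stream, then compare a run in the terminal against one redirected to /dev/null.

```python
import os
import time

def time_output(lines, stream):
    """Write `lines` short build-style messages to `stream` and return the
    elapsed wall-clock time; a slow terminal emulator inflates this badly."""
    start = time.time()
    for i in range(lines):
        stream.write("compiling module %d ...\n" % i)
    stream.flush()
    return time.time() - start

# Run once printing to the terminal, then again with output redirected
# (e.g. `python script.py > /dev/null`); the difference is pure
# terminal-rendering overhead.
with open(os.devnull, "w") as sink:
    baseline = time_output(100000, sink)
```

The same trick applies to the slow build: redirecting the compiler’s messages to a file sidesteps Terminal’s rendering path entirely.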
Normally, I would consider a system preferences app too minor to include in a review, but OS X’s deserves some recognition. OS X’s preferences tool exposes a surprising amount of configurability, and has very good tools for network and firewall configuration. Certainly, it is more powerful than GNOME’s spartan configuration tool. However, it exposes its functionality without ever making the interface seem cluttered or hard to manage. Given its excellent organization, Spotlight integration seems a bit redundant, but it’s a nice touch. Overall, it’s a perfect example of how beautiful OS X applications can be. The slick Platinum title bar blends nicely into the main window. The layout is precise, and everything seems like it is in the proper place. The window changes size smoothly as required to contain the widgets within. Clearly, a great deal of attention went into making this overall excellent preferences application.
Spotlight Preferences
One of my favorite apps on OS X is TeXShop, a front-end to the TeX typesetting system. TeX is a markup language, like HTML, that is extremely powerful and well-suited to writing everything from scientific articles to novels. It is, in short, everything that Microsoft Word aspires to be, but knows it can never achieve. TeXShop is a sublime front-end to a number of different variants of TeX, notably LaTeX and ConTeXt. Its interface is simple and elegant, certainly deserving of the design award it won from Apple. At the same time, it exposes a great deal of power. It has a large set of pre-built templates for generating the code for things like tables and figures, a macro editor, and good AppleScript integration. It does tag completion, bracket-matching, and spell-checking. In all, if I were stuck on an island with a thesis to complete, TeXShop is the one application I would want to have with me.
TeXShop
Another app that I couldn’t live without on OS X is Aquamacs. Aquamacs is Emacs with a native Carbon GUI. Its integration with the OS is fairly good. It uses the standard top menu bar, but it doesn’t have a native-looking preferences panel or native context menus in the editor windows. However, it has excellent support for native OS X shortcut commands, which means that the app is completely usable without having to learn Emacs’s horrid and user-hostile shortcut sequences. Aquamacs also integrates with an Emacs package called SLIME, which turns Emacs into a top-notch Lisp IDE. SLIME, in turn, integrates with an excellent Common Lisp system for OS X called OpenMCL, which features a fast, native-code compiler, good code generation, and great error messages. Similarly productive environments for Lisp development are available on other platforms, but the Aquamacs/SLIME/OpenMCL combo is the first one I’ve found that combines the significant virtues of running on a UNIX platform, integrating with the host GUI, and being free.
Aquamacs and SLIME
I hate to end things on a depressing note, but I need to say a few words about another application I use regularly, Matlab. Matlab, in its Mac incarnation, sucks. Of course, it sucks on every platform, so that is not too much of a complaint. Matlab is an X11 application, and it very clearly clashes with the rest of the platform. It doesn’t use the top menu bar, preferring an in-window menu bar instead. Matlab is an eclectic mix of a main program using a custom toolkit and add-on programs using Java’s Swing. The latter are rather buggy, and are brain-dead from an interface point of view, but are made tolerable by OS X’s good Java implementation. One thing Matlab brings out, through no fault of its own, is the uneasy truce between the CLI and GUI worlds in OS X. OS X’s GUI/CLI integration is not nearly as good as BeOS’s was. The UNIX filesystem looks quite different from the Mac one, and it is always confusing to use Matlab, which mixes both views by virtue of having its own file browser as well as using the standard OS X file dialogs. On the point of GUI/CLI integration, OS X is definitely inferior to GNOME on Linux.
Matlab and X11
OS X’s performance has always been a contentious issue. I myself found early versions of the OS to be unusable, given the machines on which they were expected to run. Luckily, 2.3 GHz will light a fire under even the fattest software. In a week of usage, I never found the machine to be slow. It doesn’t feel super-fast, the way BeOS used to, but it never lags under load or otherwise freezes up. I was quite impressed by the speed of the OS X widget set. OS X’s widgets feel very fast and smooth. For example, Spotlight’s result list always scrolls and resizes smoothly, even when it is displaying hundreds of hits. Of course, another advantage of OS X’s graphics system is that redraw is always flicker-free, since the desktop is double-buffered. This feature lends an incredible feel of solidity to the desktop, one I find far preferable to Windows’ faster but “twitchier” behavior.

The performance picture isn’t all rosy, however. Darwin really is slow, and it’s not just something a server administrator will notice. Basic things I do in the CLI, such as expanding large archives, feel much slower on the G5 than on the X2. Though I did not include compile performance in the benchmark results below, because compiling to PowerPC and compiling to AMD64 are two different things, I found that the G5 is perhaps two-thirds as fast at compiling software as the X2. The blame for this issue can be distributed between the G5 CPU, for its mediocre integer performance, and Darwin, for its mediocre file I/O performance.
At this point, I would like to present some benchmarks I’ve conducted, but I want to preface them with a warning. The benchmarks below are not designed to show off the G5 or the X2. They are not designed to show the absolute best performance achievable on each platform; the SPEC benchmarks, the results of which are easily available, do that very well. What these benchmarks are designed to do is to give an idea of how the machines will actually behave running real software. Therefore, I didn’t use XLC on the G5 or Intel C++ on the X2. I used good old GCC 4.0.1, which is the standard compiler on both OS X 10.4 and Ubuntu 5.10, and is the one with which most applications on these two platforms will be compiled. I should also point out that the use of GCC gives an advantage to the X2, not so much because GCC optimizes better for x86, but because the Opteron architecture is much more forgiving of mediocre code generation. I consider this a fair arrangement, because reliance on magic compiler technology for good performance is as much a design flaw as a crappy FPU or a slow memory bus. In the world of SPEC, the processor has the privilege of running highly-tuned code. In the real world, it runs whatever the user wants to run, which, more often than not, will be mediocre code from a commodity compiler.

I would also like to make a comment about benchmarking in general. I am not someone who is impressed by small constant factors. I consider differences of 5%, the kind that gamers get excited over, to be statistical noise. I do not think most people will even notice a difference of 10%. At 20%, the differences become noticeable, if one is looking, but I can’t say I’ve ever been in a situation where 10 minutes would have been too long, but 8 minutes would have been acceptable. From my perspective, unless I’m trying to show off, the difference has to be in the 30% range before it matters.
The benchmark lineup below is designed to reflect the type of programs I use on my machines. I use my computer for web browsing, listening to music, writing code, writing reports, and running engineering software and simulations. As I mentioned earlier, all the benchmarks were conducted using GCC 4.0.1. In the case of the G5, it was the version in XCode 2.2, while for the X2 it was the default version in Ubuntu 5.10. The compiler options used were as follows:
G5: -O3 -mcpu=G5 -mtune=G5 -mpowerpc64 -mpowerpc-gpopt -funroll-loops
X2: -O3 -march=k8 -funroll-loops
The X2 doesn’t really care about the compiler flags, but the G5 does. Using the above flags improved the performance of the G5 noticeably compared to the “-O2” option I normally use. It should also be noted that I didn’t use Apple’s “-fast” metaflag, for a number of reasons. First, it is partially redundant: it specifies several options that are on by default anyway. Second, it slowed down benchmarks, relative to the above flags, as often as it sped them up. Third, in the one case where it did show noticeable improvement over the above flags, nbench, it also generated code that could not complete the Neural Net portion of the test. The problem flag in question, -ftree-loop-linear, caused Neural Net to hang on both the G5 and the X2.

At this point, an astute reader will notice that, with the aforementioned compilers and compile flags, the following benchmarks were run in 32-bit mode on the G5 and in 64-bit mode on the X2. This difference was intentional. The G5 is fastest when running 32-bit code, and the X2 is fastest when running 64-bit code. Moreover, OS X and its apps are almost completely 32-bit, while Ubuntu and its apps are almost completely 64-bit. Not only was each processor running in its fastest mode, but each was running the type of code it would run during normal usage.
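For what it’s worth, a running process can confirm which mode it was built for by checking its own pointer size; a quick sketch:

```python
import struct

# Pointer size of the running binary: 4 bytes in a 32-bit process (the
# norm on 2005-era OS X) versus 8 bytes in a 64-bit one (the norm in an
# Ubuntu AMD64 userland).
pointer_bits = struct.calcsize("P") * 8
print("running in %d-bit mode" % pointer_bits)
```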
One last note before getting to the benchmarks. Each benchmark description specifies the units in which the results are expressed, as well as whether higher or lower values are better. Listed along with each result is a ratio specifying the relative performance of the G5 to the X2. Regardless of the unit of the benchmark, “ratio” values over 1.0 mean the G5 was faster, while “ratio” values below 1.0 mean it was slower.
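For concreteness, the ratio column can be computed as in the following sketch (Python purely for illustration; the function name and the hard-coded values are mine, not part of any benchmark tool):

```python
def g5_to_x2_ratio(g5_result, x2_result, higher_is_better):
    """Relative performance of the G5 versus the X2.

    For throughput-style results (higher is better), this is simply
    G5/X2; for time-style results (lower is better), the fraction is
    inverted so that values above 1.0 always mean the G5 was faster.
    """
    if higher_is_better:
        return g5_result / x2_result
    return x2_result / g5_result

# Example: a 4m41s (281 s) encode on the G5 versus 4m51s (291 s) on
# the X2 yields a ratio of about 1.04 in the G5's favor.
lame_ratio = g5_to_x2_ratio(281, 291, higher_is_better=False)
```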
The first benchmark is a test of MP3 encoding performance using LAME. This benchmark should test the processor’s integer performance on large streaming data sets. The version of LAME used was 3.96.1. The source data was Live’s “Birds of Pray” CD, ripped by iTunes to WAV format. The results below are the time taken to encode the entire CD. Lower values are better, and the average of three trials is reported.
G5 X2 Ratio
LAME: 4m41s 4m51s 1.04
This particular benchmark is a good showing for the G5. For all intents and purposes, the two processors achieved the same result. Given its significantly greater memory bandwidth, though, the G5 should have come out slightly ahead; that it merely tied indicates that its integer performance is weaker than the X2’s.
The next benchmark is TSCP, a chess AI. This test should be a good indicator of the system’s performance on branch-heavy integer code, such as many AIs and other forms of decision logic. The version of TSCP used was 1.81. The results below are in thousands of decision nodes visited per second, with higher values being better and the average of three trials being reported.
G5 X2 Ratio
tscp: 303.3 388.7 0.78
This result shows what happens to the G5’s long 16-stage pipeline when many branch mispredictions occur. Like the Pentium 4, it suffers a significant penalty on such types of code.
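The effect can be illustrated with a sketch like the following (Python for illustration only; the data and threshold are made up). The branch inside the loop is data-dependent, and on random data a predictor guesses wrong about half the time; on real hardware the same traversal over sorted data runs markedly faster because the branch becomes almost perfectly predictable:

```python
import random

def count_above(data, threshold):
    # This data-dependent branch is the kind that punishes a long
    # pipeline: every misprediction forces a flush, and the deeper
    # the pipeline, the more in-flight work is thrown away.
    n = 0
    for x in data:
        if x > threshold:
            n += 1
    return n

random.seed(0)
data = [random.randrange(256) for _ in range(100_000)]

# Same answer either way, but on real hardware the sorted pass is
# much faster because the branch pattern becomes predictable.
unsorted_count = count_above(data, 127)
sorted_count = count_above(sorted(data), 127)
```

Chess search code is full of branches like this, whose outcomes depend on board positions the predictor has no way of anticipating.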
The next benchmark consists of a pair of neural network simulations. Please see the benchmark’s site for a description of the kernels. The results below are in seconds required to complete each test, with lower values being better and the average of three trials being reported.
G5 X2 Ratio
BPN: 5.16s 5.63s 1.09
SOM: 1.34s 1.42s 1.06
Interestingly enough, the G5 beats the X2 in this integer benchmark. The reason appears to be a combination of factors: while this benchmark is integer-heavy, it has very few branches, and it consists mainly of a large number of memory accesses to multi-dimensional arrays. Since these arrays fit in cache, the G5’s slightly lower-latency L2 and 5% clock-speed advantage allow it to edge out the X2.
The next benchmark is typesetting a large TeX file using pdfeTeX. This benchmark is a purely integer benchmark that tests the processor’s performance in manipulating tree-like data in cache. The specific TeX program used was pdfeTeX version 3.141592. The results below are the time taken to produce a PDF from the source file, in seconds. Lower values are better, and the average of three trials is reported.
G5 X2 Ratio
physics.tex: 1.241s 0.950s 0.77
These results are consistent with what we saw regarding the G5’s integer performance in the tscp benchmark. It should be noted that the source file was small enough to fit into cache. Had the source data been large enough to spill into memory, the results for the G5 would likely be a bit worse, given the random memory access patterns involved in the benchmark and what we can see of the G5’s memory performance in the later benchmarks.
The next benchmark is SciMark2. Two variants of the benchmark were used, one with a small data set and another with a large data set. Note, I do not report the composite score, because it makes no more sense to me to sum the results of completely different benchmark kernels than it does to sum vectors and kumquats. Please see the SciMark FAQ to see what each test kernel does, and then compare each result individually. The results below are in MegaFLOPs, with higher values being better, and the average of three trials being reported.
G5 X2 Ratio
Small FFT: 583.4 585.3 1.00
Large FFT: 32.0 56.0 0.57
Small SOR: 434.0 515.2 0.84
Large SOR: 394.1 505.9 0.78
Small MC: 86.3 261.9 0.33
Large MC: 86.6 260.6 0.33
Small SMM: 744.7 722.2 1.03
Large SMM: 312.2 351.9 0.89
Small LU: 1202.9 837.6 1.44
Large LU: 411.5 424.6 0.97
Well, these results are all over the map. The most noticeable thing is that the X2’s performance degrades much more gracefully when we go from the small, in-cache data set to the large, in-memory data set. Clearly, the X2’s low-latency memory subsystem gives it a leg-up in these benchmarks. Even with 33% more memory bandwidth and 5% more clock speed, the G5 loses in nearly every test. In most of these cases, the G5 is 80% to 90% as fast as the X2, and is 44% faster in one case, but in three of the benchmarks, it is about one-half to one-third as fast. Poor code generation by GCC on a couple of these kernels is a likely explanation.
The next benchmark is a C version of the classic LINPACK benchmark. LINPACK is an extremely simplistic benchmark that measures the raw FP throughput of the processor by solving a large, dense system of linear equations, an operation dominated by multiply-accumulates. It represents the best case for FPU performance on a particular processor. The variant below is a double-precision version of LINPACK. The results below are in MegaFLOPs, with higher values being better and the average of three trials being reported.
G5 X2 Ratio
LINPACK: 1391 876 1.59
The results of this benchmark are great for the G5. It is nearly 60% faster than the X2 here. This result is to be expected, since the G5 has a floating-point multiply-accumulate (FMAC) instruction, and multiply-accumulates dominate the inner loops of this benchmark. On the X2, the compiler must generate two separate instructions to have the same effect as one FMAC.
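The kernel at issue has roughly the following shape (a Python sketch of the arithmetic only; the actual benchmark is compiled C code):

```python
def daxpy(alpha, x, y):
    # y <- alpha*x + y, the multiply-accumulate kernel that dominates
    # LINPACK's run time. Each element requires a multiply feeding a
    # dependent add: one fused fmadd instruction on the G5, but two
    # separate instructions on the X2.
    return [alpha * xi + yi for xi, yi in zip(x, y)]
```

Fusing the two operations not only halves the instruction count for this kernel, it also skips the intermediate rounding step, which is part of why FMAC-equipped chips tend to post outsized numbers on dense linear algebra.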
The next benchmark is a Fourier transform, specifically the one distributed with the FFTW 3.0.1 source code. Four variants were used here: a small and large double-precision FFT, and a small and large single-precision FFT. All are in-place, complex, forward transforms. In this benchmark the small data-set result is more important. I don’t know of any particular uses for 1 million point FFTs, though I’m sure there are some. In comparison, 4096-point FFTs are quite common in signal processing, and JPEG2000 uses a related algorithm on similarly sized blocks for image compression. Since AltiVec cannot do double-precision math, both AltiVec and SSE2 were disabled for the double-precision tests. For the single-precision tests, AltiVec is enabled, but for some reason FFTW generates improper assembly for SSE2 (on GCC 4.0), so in fairness no numbers are reported for the X2. However, the G5 results are reported anyway, because they show that the G5 makes one heck of a digital signal processor! Moreover, these results are likely quite representative of highly-tuned AltiVec-friendly algorithms. As before, the results below are in MegaFLOPs, with higher values being better and the average of three trials being reported.
G5 X2 Ratio
Double (4K): 2594 2044 1.27
Double (1M): 777 662 1.17
Float (4K): 3709 * *
Float (1M): 915 * *
These results support my belief that GCC generated poor code on the G5 for the FFT in SciMark. The G5 wins by quite a margin in both double-precision cases, even though we can again see the X2 closing the gap when memory performance comes into play.
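For readers unfamiliar with the benchmark, the transform FFTW computes is equivalent to the following naive O(n²) sketch (Python for illustration; FFTW produces the same result in O(n log n) with heavily tuned code). Note that each term of the sum is itself a complex multiply-accumulate, which is part of why the FMAC-equipped G5 does so well here:

```python
import cmath

def dft(signal):
    # Naive discrete Fourier transform:
    #   X[k] = sum over t of x[t] * e^(-2*pi*i*k*t/n)
    # Each term is a complex multiply-accumulate; a real FFT library
    # computes the same result in O(n log n) instead of O(n^2).
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]
```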
The next benchmark is raytracing with POVray, using the built-in POVray benchmark. This benchmark tests the performance of the CPU on an essentially floating-point algorithm with a significant integer component. Random access memory performance also comes into play for accessing many of the bookkeeping data structures necessary to perform the actual raytracing step. The version of POVray used was 3.6.1. The results below report the time required to render the 384×384 test image. Lower values are better, and the average of three trials is reported.
G5 X2 Ratio
POVray: 32m32s 25m31s 0.78
We see here that despite its very impressive theoretical floating-point performance, the G5’s integer performance hurts it in this real-world application. While some would argue that POVray isn’t optimized for the G5, it should be noted that POVray isn’t exactly optimized for the X2 either. Still, I think it is safe to say that the G5 has more untapped potential in situations like these than the X2 does.
The next benchmark is a Blender render using the pseudo-standard Blender benchmark file test.blend. A word of warning about this benchmark: I could not get it to compile properly on OS X with GCC 4.0. As a result, I used the standard binary for OS X. I do not consider these results fully representative of what the G5 can do, but have included them anyway, because in the real world, it is rarely the case that all of one’s applications will be using the vendor’s absolute-latest compiler. This test stresses a number of different aspects of the system, since rendering involves both floating-point and integer operations on large in-memory data sets. This test should be a good indicator of how the G5 can be expected to run media applications not specifically optimized for a particular processor. Note that this test is dual-threaded, so it uses both cores on each CPU. The results below are in seconds, with smaller values being better and the average of three trials being reported.
G5 X2 Ratio
test.blend: 1m34s 1m15s 0.80
These results are about as expected, and very similar to the POVray result above.
Moving on to the mixed-code benchmarks, we have FreeBench, a free cross-platform benchmark suite. The version used was 1.0.3, compiled from the UNIX tarball for both machines. Some scripts were edited on OS X to make the suite build properly, but no code was changed. A description of the test kernels can be found here. The results below are speedup relative to a 333MHz Sun Ultra 10, with higher values being better and the average of three trials being reported.
G5 X2 Ratio
Analyzer: 7.71 3.28 2.35
4 in a Row: 12.01 16.97 0.71
Mason: 8.76 11.42 0.76
pCompress2: 6.59 9.68 0.68
PiFFT: 5.72 6.71 0.85
DistRay: 7.59 8.88 0.85
Neural: 3.56 7.89 0.45
Aside from the two outlying values, these results are fairly consistent with what we’ve seen so far. The G5 performs around 75% as well as the X2 on integer code, and 85% as well on general floating-point code.
The last benchmark is nbench, which is a total processor benchmark based on the original ByteMark. The version of nbench used was 2.2.2. Again, I think combining the results into a composite score is meaningless, so I do not report the final average. Please read the description of each kernel here and evaluate each result individually with consideration to what performance parameters are tested by each kernel. The results below are speedup relative to a K6-233, higher values are better, and only one test was run since nbench has its own trial-repetition and averaging logic.
G5 X2 Ratio
Numeric Sort: 7.19 9.43 0.76
String Sort: 25.28 13.92 1.82
Bitfield: 7.03 11.97 0.59
FP Emulation: 17.42 16.88 1.03
Fourier: 17.14 12.79 1.34
Assignment: 26.18 35.59 0.74
IDEA: 19.18 24.08 0.79
Huffman: 12.48 14.71 0.85
Neural Net: 2.03 28.10 0.07
LU Decomp: 34.59 45.43 0.76
All in all, this benchmark is a decent showing for the G5. It loses two benchmarks by a large margin, wins two by a large margin, ties one benchmark, and is between 75% and 85% as fast as the X2 on the rest. The neural net result is incredibly bad, but that is partly the result of compiler optimizations. When using “-Os”, the G5 achieves a score of around 14 on this trial, though its scores in all the other kernels suffer significantly.
The above benchmarks present a lot of numbers to go through. However, a few trends clearly present themselves. For general integer code, the G5 shows itself to be about 75% as fast as the X2. For general floating-point code, it seems to be 85% as fast. For code that really plays to its strengths, it can be 50% or more faster. Of course, on code that really hits its weaknesses, or bad compiler optimizations, as the case may be, it can be 50% or more slower. One observation we can make is that the G5 seems to be quite picky about code generation, given the many cases above where its performance drops to a fraction of the X2’s. We can also see that its memory controller is relatively poor compared to the X2’s, since its performance drop in the large-data tests is always larger than the X2’s. The G5 seems to have an excellent FPU, which shines in particularly suitable benchmarks but is more often than not hampered by the chip’s mediocre integer performance and memory subsystem.
Let me address the inevitable “Ubuntu vs. OS X” comparison. The thing that surprised me most about using the PowerMac was not that OS X’s UI was better than Ubuntu’s, which I have known for quite some time, having used both regularly, but how small that difference really was. As I said, OS X’s UI is a step up from GNOME’s. However, I wouldn’t say it is in a completely different league. In terms of the fundamental UI elements, GNOME is extremely competitive. In some areas, such as the file-type management I noted above, GNOME is better, largely because it is more rigorously consistent. On the other hand, OS X’s interface is faster, sexier, and a touch more polished, which is icing on the cake. In terms of ease of configuration and maintainability, the two machines were comparable, which is expected given that each was purposefully built to work with its respective OS. Lastly, Linux, as a kernel, is miles ahead of Darwin. Ultimately, if it were just a matter of “Ubuntu versus OS X”, I’d see little reason to keep the PowerMac. What really makes the Mac shine are its applications. Even commonly-used OSS programs, like VLC, are better in their OS X incarnations. Add to that the software you can only get on OS X, TeXShop and Aquamacs in my case, and there is a compelling argument for OS X.
The Value Judgement
I would like to, for a moment, address the issue of value. The question is: is the PowerMac worth it? This is a complex question, with subjective elements, but there is no reason it cannot lend itself to analysis. The PowerMac, as configured, would cost $2700 retail if the added RAM were bought from a third party. The X2 cost $1700; factoring in a hefty profit margin of 20%, roughly what Apple makes on its high-end machines, it would sell as a $2000 retail machine. The price difference comes to $700, just a bit below the average retail selling price of a desktop computer. For the extra $700, the PowerMac is slower, louder, uses cheaper parts, and runs OS X. The mathematics of the situation are clear. If OS X is worth an extra $700, plus the sacrifices in performance and silence, then the PowerMac is a good value. Since the PowerMac is staying by my desk, it’s obvious what OS X and the software that runs on it are worth to me. However, to very many people, OS X and its applications will not be worth that much.
The Switch
I would also like to address “The Switch”. The way I see it, switching was a very good thing, for multiple reasons. From a performance standpoint, the 970MP is not the savior of PowerPC on the desktop. It is, simply, too little, too late. A gross generalization of the above benchmarks would peg the G5, when running code not specially optimized for it, at about 85% of the performance of the X2. As I said before, I consider this insignificant. However, let’s look at it from a marketing perspective. 85% of the X2 4400+ puts the mid-range 970MP squarely below the lowest-end Athlon X2, the 2.0 GHz 3800+. That’s not a good thing when the chip is in a machine that costs more than its competitors’ high-end ones.
There is, of course, the issue of the upcoming PA Semi PowerPC chip. In my opinion, the PA Semi chip represents precisely what screwed PowerPC on the desktop to begin with. It is an embedded chip, and we all know how great it was for Apple to rely on an embedded chipmaker to supply their desktop processors. Furthermore, its performance is lackluster compared to today’s machines, much less the machines of 2007 with which it will compete. Its SPECfp is good, about as high as the best Opterons’, but its SPECint is at the level of the slowest Opterons available. The PPC970 and the Pentium 4 should have been a lesson that a chip with great floating-point performance and mediocre integer performance is not going to fly with developers. This is especially true considering that FPU performance is becoming less important with the advent of GPUs and GPU-enabled media libraries like CoreImage.
PowerPC-lovers need to look hard at the Opteron and see why the market has embraced it so well. The Opteron is not the leader in any one category. However, it is an extremely well-balanced chip that shows great performance in a large variety of problem domains. It doesn’t need great compiler technology to perform well, and it doesn’t choke on particular types of code. 
It has a decent price, decent power usage, and is nicely scalable. It is, in short, everything that the PPC970 isn’t, that the Cell isn’t, and what Intel is hoping Conroe and Merom will be.
The Last Word
The PowerMac G5 is a frustrating machine. As a piece of hardware, it is inferior in almost every way to the Athlon X2 machine sitting next to it. That is not to say that it is a bad piece of hardware, but rather that it can’t match up to a truly excellent machine. However, software is what counts, and OS X is the platform that runs the applications I want to run. For that reason alone, the PowerMac isn’t going anywhere any time soon.
About the Author
Rayiner Hashem is a student of aerospace engineering at the Georgia Institute of Technology. His interests in the field of computing include operating systems, programming languages, and compiler technology.
I haven’t read such a professional review (comparison of two systems) in a very long time! Congratulations – this is how reviews/tests should be written!
It is also brave – in my experience, (I can accept that it is a small but vocal minority of the userbase) mac users don’t like criticism too much, even if it is supported by hard evidence.
I must fully agree, even though it’s almost a bit too in-depth for my liking.
Rayiner’s writings have always been a high-point.
Criticism is ok and accepted, but:
1) He paid too much for the computer, I paid almost $500 less for the similar configuration. (2 GB RAM, all Apple RAM)
2) I too have been a hard drive “bigot” preferring Seagate drives but the only drives that have failed for me in the past two years have been Seagates. My Western Digitals oddly enough have all been running fine.
3) My dual core G5’s (2.0’s and 2.3) are all very quiet, not sure what the beef is there.
4) The G5’s are too big, too much metal.
I also liked the review a lot. Count me in as one of the majority Mac fans that DO like criticism.
There once was a time that Apple did use the highest quality parts or so I read. Now, it’s all about the margin and possibly what supply they can get (while still making things ubercool). I cannot recall which drive came in my dual chip 2.3 from May, but I think it was/is a Seagate. Not knowing better, I added a WD.
I’m disappointed to find that the engineering to combat sound is still not the best. I guess I can barely live with the cheap bearings in the hard drive but cannot fathom why they didn’t put sound isolation into the mountings as that seems the most basic. That probably explains why some people can hear the CPU make noise when it goes into NAP mode. I also found it interesting to know about the airflow & cheap fans.
This begs the question: will the Intel based PowerMac have a case redesign?
Apple has always had issues with value on their top end workstations.
I’ve heard several reports that OS X on Intel is “noticeably faster” than it is on even the fastest G5s. This is pretty impressive, if true, since we’re still months away from a shipping version.
This really shouldn’t be surprising, as NeXT had several years getting things right on x86 prior to the Apple purchase, and the PowerPC port didn’t even start until after the purchase.
That said, the issue with the Intel version won’t be native performance, which will be excellent; it will be the performance of Rosetta apps, and people not realizing the difference. That is the gotcha. While so many applications aren’t Universal, that will be the Achilles’ heel of the transition.
That said, you are seeing a large number of Universal Binaries starting to appear, and that is a VERY big deal for Apple. The more of those out there, the easier the transition will be.
I do have a DTK, and have a lot of first-hand experience with the environment. While I cannot and will not give any details about its performance, I will say that, to me, it looks like the right choice on their part.
Now, I do realize OS X x86 is not production yet, but from the tests I have seen comparing it with XP:
OS Boot up/shutdown
——————-
OS X
Windows XP
Linux
Application Performance
———————–
Linux
Windows XP
OS X
OS X really needs to deal with its threading issues:
http://www.anandtech.com/mac/showdoc.aspx?i=2520&p=8
http://sekhon.polisci.berkeley.edu/macosx/
http://reviews.zdnet.co.uk/software/os/0,39024180,39235916-3,00.htm
I would venture that OS X’s performance issues are due to OS X rather than the G5, in which case jumping to slightly faster P4s is only a band-aid.
It is, but the design of the OS is better suited to the design of the x86 than it is to the PowerPC, so the change will have a far greater impact. Further, Windows XP application performance is < Win2k, which is < NT 4. Think Vista will change that? (Judging by early betas, no.)
That’s not true of OS X, where each successive version has improved.
The issue is that OS X is the only OS in the list that uses a microkernel, and you do pay a price for that. You get other benefits though, and ultimately they trade off. Not to mention that only in gaming is eking out that last bit of sprite performance really that critical. You forget that 90% of the computers in the world never lift a heavier load than a middleweight Excel spreadsheet.
I understand a micro-kernel implies a level of overhead; however I would go so far as to say the Darwin micro-kernel suffers from mediocre implementation. OS X suffers from more severe thread concurrency and kernel locking issues than micro-kernels I have worked on or used.
I do agree with you OS X (which has an excellent UI) performance is increasing with each build. This is an interesting trend since Linux* (which has excellent performance) is becoming increasingly user friendly. I look forward to these trends meeting in the middle.
As you said, most user applications are relatively lightweight; but it is ridiculous for MP3 encoding to take 40 seconds longer than the same hardware running XP (see my original 3rd link). I hope Apple uses the compatibility-breaking x86 transition to revamp their kernel locking and address other issues that they may not have wanted to approach before for fear of breaking compatibility; now would be the ideal time.
*Linux, I know is a kernel, so I will use Ubuntu in this case to represent “Linux”
I agree, that there remain issues in the kernel, some of which I don’t think will ever be fully resolved, as they appear to be the result of conscious design compromises based upon the targeted deployment.
Others I think will be resolved, thread concurrency being one of them, because the rise of multiple-processor deployments will force it to be. When we start talking 4, 6, 8, etc., it should become even more of an issue.
At the same time, you are right, most of the Linux flavors are improving. I find it interesting that Ubuntu seems to be the flavor du jour in the Linux geek world. I still find Fedora and Suse to be the most end-user-friendly distros, and the money behind them really shows.
“The issue is that OS X is the only OS in the list that uses a microkernel”
RTFM
http://www.windowsitpro.com/Windows/Articles/ArticleID/2984/pg/2/2….
NT isn’t a microkernel at the moment, nor has it been for a very long while. Even in NT4 it wasn’t a microkernel.
NT is a hybrid between a microkernel and a monolithic kernel.
http://en.wikipedia.org/wiki/Kernel_%28computer_science%29
I don’t think anyone said NT was, it’s always been an odd hybrid more like VAX in design, though this shouldn’t be a surprise since David Cutler was heavily involved in both projects.
I was simply replying to Viton, since he clearly considers NT a microkernel.
I found Rayiner’s tone for parts of the article almost whiny and very annoying. If he didn’t like the internals of the PowerMac, then why did he spend the money on it?
The noise issue is one that is no big deal for me, from time to time I have to go into the data center where there is plenty of noise. I think he makes too much of a big deal about the noise of the PowerMac. And there should be a difference between a machine he spent time and money on quieting and one “out of the box”, DUH! Again, I am sure he had the opportunity to examine an operational machine before he spent his money. If he had a problem with noise it should have came up there (it would have for me). I have an Sun Enterprise 3000 at home and it sounds like a plane taking off, I got used to the noise.
The comment about the “cheap” Western Digital drive, where the Hell did that come from? I checked PriceWatch and the Seagate drive he feels is “superior” is priced about the same as the Western Digital in the PowerMac. Obviously Rayiner is not aware of the history of Western Digital, or the quality of their products. Based on my experience I wouldn’t call them “cheap”, or better yet, ask anybody who uses a Raptor.
The software part of the article, well I wouldn’t use “pissed off” to describe anything in something that I would want published. That terminology is fine for conversation and blogging, not a “professional” piece. His comments about Matlab should not have been included if it “sucks on every platform”, not just on the PowerMac, what’s the point?
During the time it has taken me to examine his article several people have commented positively about it, unfortunately I don’t agree. Rayiner needs to learn brevity, what he says could have been said in fewer words.
I did not find him to be whiney at all. I also disagree about noise. You really don’t want the noise of your computer competing with your thought processes and your productivity. You can’t help but notice the fluctuations of the fans (at least with my iMac) as you perform certain operations.
It is amazing how loud the fans can get when you hit a web page with a couple of flash advertisements.
I agree with you about the WD drives. They have been of good quality and to the contrary, I find my seagate drive to be noisy and annoying with its clicking. But maybe I have a cheaper seagate drive.
I just didn’t like Rayiner’s writing style, which compared to other articles here could stand some work (as you have said about its organization).
I spent almost 10 years on aircraft carriers, and even when they are in port, there is lots of noise. I learned how to drown out a great deal of background noise so I could concentrate on my job. The noise of one or two computers pales in comparison.
Then again how many of us have served on an aircraft carrier?
You are right, still, if you prefer silence to do your job and that is what you are used to, it makes a difference.
I have to agree. Maybe this is from laptop use, but I have a reaction to try and stop doing heavy tasks when i hear a fan kick up faster…
It’d be preferable to just have the fan run hot all the time, or a really nice water-cooling system.
Anyway, I’m hoping to see Intel center more on low watt heat output chips; then I can buy a PC that’s not a lawn mower but can still play a game…
In the interim, I recommend putting your desk next to a closet.
Funny you should say that… we are moving soon (again – but just across town in a house we are building) and I believe my wife is relegating me and my computer to our walk-in closet.
Something to be aware of regarding noise as it applies to the target market.
In a home setting, a quiet computer, preferably nearly silent tends to be preferred, and if you look at the Apple product lines, I think you’ll see that trend. All of the consumer oriented machines have a focus on quiet operation.
The pro machines have a primary market towards the office setting, a setting where noise of that nature is a different beast altogether. In many, if not most offices, there is significant ambient noise, as well as, in many cube farms, a white noise generator that sounds suspiciously like fans or light television static. This noise has a purpose, and that is to help insulate the staff from the droning noise of many voices bleeding over the tops of the cubes.
I recognize that many PowerMacs are used in home settings (my dual 1.8 G5 is), and it does tend to be loud when the fans spin up to full speed, which only happens when I’m really pushing the machine, but it is no louder than the ambient noise in most workplaces, and that is the misconception.
If anything is ‘whiny’ here it’s your post. You confuse commenting on ‘quality’ with being surprised with what you get. Mentioning the noise of the G5 receives “well of course it’s louder than something you made yourself [plus I’m ignoring your comments about those Dell machines].” Referring to a model of WD HDD as “cheap” brings up the Raptor (which is neither quiet nor the HDD mentioned). I think the noise here is overstated and “cheap” from a quality perspective undeserved, but you could mention a review from Storage Review rather than bring up a completely unrelated product.
Getting used to noise pollution is “damage.” It’s great you don’t care, but that has little to do with the quality of his review.
I don’t think the review is completely coherent, and I’ve certainly seen better, but it’s not especially bad. Colloquialisms are the least one has to complain about in the age of the blog, and if anything really bothered me it was the collection of disparate concerns in an attempt to provide an all-encompassing review of not just the product but the ‘switch.’ Not a big deal, and definitely of higher quality than the average submission here.
And where did I say anything about Dell machines in any of my posts? How about not confusing my posts with others. I did mention HP and Sun hardware in one of my posts.
I was trying to make a point about the quality of Western Digital’s products, obviously you didn’t get it.
The point about noise is relevant since Rayiner spends a significant amount of time discussing it. Some people go “over the top” about noise and considering the purpose and market for this machine, I am not surprised by the number of fans and the amount of noise it makes. If one of the things you are looking for in a machine is something that is quiet, then don’t buy a workstation class machine for the power then bitch about the noise.
I have seen better articles here as well, and this is not one of them.
Did you read his article where he specifically mentions the Dell computers in his apartment being quieter, or did we read different articles?
Maybe what you don’t get is that the quality of “Western Digital” makes no sense. The quality of models makes sense. OH MY GOD YOU HAVE A WESTERN DIGITAL? DON’T THEY MAKE THE RAPTOR? THAT MUST MEAN IT’S COMPARABLE TO THAT SEAGATE DRIVE IF NOT BETTER. Your argument is completely amusing. It’s the model of HDD he’s referring to as cheap, not the company. The company and its other products don’t matter. How do you refute his comments? By making some vague reference of knowing their history (wow, what a spectacular argument) and referring to a completely irrelevant product.
His point is that his competitive X2 machine is quieter. And commenting about the noise only bothers you because it’s negative. You’ve damaged your hearing with noise pollution, but that’s no big deal right? You don’t care, so why include it in a review.
It’s the model of HDD he’s referring to as cheap
If it wasn’t clear, the hard drive in the PowerMac is the 2500JS, a special cheap version of the 2500JD (cheaper to the tune of 10% or more, depending on supplier). That’s why I referred to it as “cheap”. It’s like using an ASRock motherboard instead of an Asus. Both are from the same company, but one is a cheaper (presumably lower-quality) model.
Also, from a silent computing standpoint, Western Digital, aside from their Raptor, is very decisively in the rear of the pack. Most people agree that Samsungs are the quietest, followed by Maxtor, followed by Seagate, followed (distantly) by Western Digital and Hitachi.
The WD2500JS is the special “SATA-II” JD. That is to say, the JS is a marginally improved revision of the JD with support for SATA-II. It is not WD’s premium 7200rpm model, but it’s not a low-quality device; the WD800BD would be a “cheap” WD device. Performance-wise it should be competitive with the Barracuda 7200.8. Since the JD was competitive noise-wise with the 7200.8, I wouldn’t expect much difference from the JS except perhaps a marginal improvement. However, without any benchmarking I’m forced to cede that I don’t know, because I don’t own a JS to test myself. I do own a pair of WD4000KDs which, while they produce noise, aren’t loud, and they are probably louder than the JS. Heck, the Raptor isn’t quiet, and it’s still not loud enough in my opinion to care.
I don’t think “cheap” or “noisy” is fitting for the drive you obtained. The drive isn’t special in terms of speed, capacity, or probably noise production. I agree firmly that it’s nothing special, but it’s probably not what you’re characterizing it as either.
You should definitely stop using Froogle to gauge the “cheapness” of hardware. For one, Froogle is a piece of crap. For another, it’s not very useful for ascertaining any sort of technical information. Finally, don’t do it. It’s not even a good way to find a deal.
The WD2500JS is the special “SATA-II” JD.
Exactly; the WD2500JS is exactly the same as the JD version except that it has a SATA-II interface.
Silent PC Review recommends the WD2500JD in their recommended hard drives list: http://www.silentpcreview.com/article29-page2.html
The Seagate 7200.8 250GB didn’t make the list. Since Rayiner holds the opinions of SPCR in such high esteem and used them to defend his point, I would like to see him snivel out of this one.
I’m not trying to snivel out of anything. You’re trying to tell me that my impression of two drives that I own (that you don’t), in two machines that I own (that you don’t) are wrong. You can repeat this all you want, but at the end of the day, my Seagate is still quieter than the WD.
As for the recommended list you pointed out — it’s not entirely up-to-date, and it’s also the result of a limited sampling of drives. For example, the Maxtor DM10s don’t make the list, even though they are widely reported to be very quiet and very fast (I didn’t buy one because I’ve had 2 Maxtors die on me). On the forums, several people have had good experiences with the 7200.8, and several people have had whiny 2500JDs.
I’m not trying to snivel out of anything. You’re trying to tell me that my impression of two drives that I own (that you don’t), in two machines that I own (that you don’t) are wrong. You can repeat this all you want, but at the end of the day, my Seagate is still quieter than the WD.
Yes you are. I don’t give a damn what you think. Every piece of independent review and fact supports that the WD hard drive in the G5 is quieter than the Seagate and equal or superior in every respect. Even Silent PC Review recommends it; the same guys you consulted for your other components, presumably.
You can hear anything you want, voices in your head, see dead people. But that doesn’t mean I should trust you over manufacturers’ specs and independent reviewers, in fact numerous ones, that prove you wrong. You can claim all you want that standards don’t exist and manufacturer specs are wrong. But the fact remains that your biased attempt at writing a review was taken to task and you offered a very weak defence.
As for the recommended list you pointed out — it’s not entirely up-to-date, and it’s also the result of a limited sampling of drives. For example, the Maxtor DM10s don’t make the list, even though they are widely reported to be very quiet and very fast (I didn’t buy one because I’ve had 2 Maxtors die on me). On the forums, several people have had good experiences with the 7200.8, and several people have had whiny 2500JDs.
I don’t care; the 7200.8 has been out for a while. On the forums many complained about the 7200.8 as well and are waiting for the 7200.9. Also, if you read the forums, Seagate had to disable AAM and all their drives are doing performance seeks, whereas the WD2500JS is set to AAM (soft seek). You can read the review of the WD2500JD on the site; they did measurements.
I don’t have the time or energy to copy and paste snippets of forum posts proving my point, because it won’t make a difference.
Name-brand memory is what it is. Samsung is a name brand and a very respected OEM manufacturer. First you claimed that Samsung didn’t make DIMMs and that the memory in the G5 had no markings. Now that your assertion has been proven wrong, you change the meaning of what you think name-brand memory means.
It is the classic human response when someone is taken to task: anger, denial, acceptance and response. I think we have crossed the anger phase with you, but the denial phase seems to never end.
Your conclusions in the article are garbage and are based on biased opinion that you tried to pass off as fact. That’s all I wanted to point out.
Being mistaken isn’t a crime. While I don’t necessarily expect that the JS is louder than the 7200.8 in any way that would personally bother me, I don’t have any actual measurements and I don’t know of anyone who does, which is why I mostly focused on addressing why I thought the arguments against his article in this thread were poorly constructed. I’m not going to take Seagate or WD’s measurements at face value, and I’m not going to take Rayiner’s either, but I’m not going to say that he’s lying. If someone wants to conduct actual measurements of the disks, that would be helpful. Also, there’s a slight problem with just measuring how loud the drives are, which is that some noises are more annoying than others. However, something is better than nothing.
My keyboard makes much more audible noise than any of my HDDs, and much more frequently at that. Actually that’s one of the reasons fan noise and optical drives bother me; they produce constant (when in use), often high-frequency background noise. Or really bad GPU fans that sound like leaf blowers. Those are pretty annoying. HDD clicking on the other hand is not that annoying, and I would have to stick my head next to a case in order to hear much else. That hasn’t always been the case; I’ve had some loud SCSI disks before.
Come to think of it, the optical drive is probably the most irritating thing in my laptop. Now let the fighting over the optical drives begin.
It just occurred to me that you didn’t understand the function of the brackets in the made-up quote I wrote. Brackets in a quote represent additions or substitutions made to clarify meaning. In my post I was using them to point out that you were intentionally ignoring his comment about the pre-built Dells in order to dismiss his comments about noise on the grounds that he made the X2 himself. Since you didn’t say it yourself, I took the liberty of pointing it out with the brackets.
I found Rayiner’s tone for parts of the article almost whiny and very annoying. If he didn’t like the internals of the PowerMac, then why did he spend the money on it?
The OS that runs on it.
The noise issue is one that is no big deal for me, from time to time I have to go into the data center where there is plenty of noise.
Going into a data centre is one thing; using a desktop in an office or study is quite another. That’s the most in-denial comment I’ve heard for a little while.
And there should be a difference between a machine he spent time and money on quieting and one “out of the box”, DUH!
The point is you pay money to Apple for them to do it for you. It should not be that noisy, plain and simple.
The comment about the “cheap” Western Digital drive, where the Hell did that come from? I checked PriceWatch and the Seagate drive he feels is “superior” is priced about the same as the Western Digital in the PowerMac.
The point being made was that some people believe Apple uses higher-quality components. By pointing to the Western Digital hard drive, the same one you’d put in a normal PC, Rayiner is making the point that that isn’t true.
Obviously Rayiner is not aware of the history of Western Digital, or the quality of their products. Based on my experience I wouldn’t call them “cheap”, or better yet, ask anybody who uses a Raptor.
Quality of hard drives tends to jockey around. Western Digital drives have had more than their share of problems, Seagates work but are extremely average, Maxtors were good but I’ve seen them fail more than once recently, and the best at the moment, in my opinion, is Samsung.
For the money that you pay to Apple they should be looking out for these trends in good quality components and be sourcing from suppliers more appropriately. Apple need to buck up their ideas if they think they’re going to go off and sell a bog standard Intel PC in a slightly more pretty case at a large premium.
More interestingly, on the same platform we’re going to find out just how well Mac OS X performs compared to Linux and Windows. Unfortunately, there’s no hiding place for Apple of “Oh, but it’s running on different hardware”.
The software part of the article, well I wouldn’t use “pissed off” to describe anything in something that I would want published. That terminology is fine for conversation and blogging, not a “professional” piece.
There are so many people out there in the world who just cannot take bluntness for the life of them. Get used to it.
Rayiner needs to learn brevity, what he says could have been said in fewer words.
Yer. The PowerPC hardware as a desktop is crap overall and the only thing that would make anyone buy it is OS X as a desktop environment. OS X and PowerPC together as a server combination is suicide.
The point is you pay money to Apple for them to do it for you. It should not be that noisy, plain and simple.
It isn’t. I have a dual-core 2.0GHz G5 and it is plainly much quieter than the Athlon box it replaced.
The noise issue is personal and a value judgement. Since Rayiner didn’t post any measurements using an SPL meter, it is subjective at best.
“I found Rayiner’s tone for parts of the article almost whiny and very annoying. If he didn’t like the internals of the PowerMac, then why did he spend the money on it?”
I disagree, both with the title, and with the comment above. He already told you why he spent the money, and he was detailing what he liked, and what he didn’t like. That’s the purpose of the piece.
“The noise issue is one that is no big deal for me, from time to time I have to go into the data center where there is plenty of noise. I think he makes too much of a big deal about the noise of the PowerMac. And there should be a difference between a machine he spent time and money on quieting and one “out of the box”, DUH! Again, I am sure he had the opportunity to examine an operational machine before he spent his money. If he had a problem with noise it should have come up there (it would have for me). I have a Sun Enterprise 3000 at home and it sounds like a plane taking off, I got used to the noise.”
You seem defensive. He’s obviously not using it in a noisy data center. It’s there for your information. If you don’t like noise, it’s an issue. If you don’t mind, it’s not.
“The comment about the “cheap” Western Digital drive, where the Hell did that come from? I checked PriceWatch and the Seagate drive he feels is “superior” is priced about the same as the Western Digital in the PowerMac. Obviously Rayiner is not aware of the history of Western Digital, or the quality of their products. Based on my experience I wouldn’t call them “cheap”, or better yet, ask anybody who uses a Raptor. ”
Might be a good point there.
“The software part of the article, well I wouldn’t use “pissed off” to describe anything in something that I would want published. That terminology is fine for conversation and blogging, not a “professional” piece. His comments about Matlab should not have been included if it “sucks on every platform”, not just on the PowerMac, what’s the point? ”
He elaborated on that. I found his descriptions informative. That they were too informal for you is simply a matter of personal taste.
“During the time it has taken me to examine his article several people have commented positively about it, unfortunately I don’t agree. Rayiner needs to learn brevity, what he says could have been said in fewer words.”
And you had one decent criticism in five paragraphs. What’s good for the goose…
While you’re at it, you might want to read my follow up posts so you get the “whole picture” including my response to Rayiner.
You have the most annoying habit of writing arguments that were effectively rebutted by the original article. Either RTFA or shut the heck up.
Let’s look at a portion of your first post:
I found Rayiner’s tone for parts of the article almost whiny and very annoying. If he didn’t like the internals of the PowerMac, then why did he spend the money on it?
He already explained that very thoroughly. RTFA.
The noise issue is one that is no big deal for me, from time to time I have to go into the data center where there is plenty of noise.
He already said that you need to evaluate what’s important to you; these were his impressions. RTFA.
I think he makes too much of a big deal about the noise of the PowerMac.
Your personal opinion again, nimrod. RTFA.
And there should be a difference between a machine he spent time and money on quieting and one “out of the box”, DUH!
He already said that he had three P4 Dell machines in his apartment, and all of them were quieter than the PowerMac. RTFA.
Again, I am sure he had the opportunity to examine an operational machine before he spent his money. If he had a problem with noise it should have come up there (it would have for me).
What if he ordered online? Must everyone go into an Apple store to verify conventional wisdom (e.g. that Macs are quieter)? The article is about his experiences. RTFA.
I have a Sun Enterprise 3000 at home and it sounds like a plane taking off, I got used to the noise.
Again, that’s your experience. If you want to write an article about it, fine. But as Rayiner said, more and more people are becoming aware of this problem. RTFA.
The comment about the “cheap” Western Digital drive, where the Hell did that come from? I checked PriceWatch and the Seagate drive he feels is “superior” is priced about the same as the Western Digital in the PowerMac. Obviously Rayiner is not aware of the history of Western Digital, or the quality of their products. Based on my experience I wouldn’t call them “cheap”, or better yet, ask anybody who uses a Raptor.
Whatever. He made no direct comparison of the WD to the Seagate, so I don’t see where the “superior” claim is that you are somehow rebutting. The only thing he implied was superior about the Seagate drive was the noise level. RTFA.
Demanding that people read your thread history is sooooo hypocritical when you’re just trolling like an Apple sycophant yourself.
I don’t own a Mac, so the “Apple sycophant” comment is a waste of time. And if we were talking about my experience with Macs (MC68000 and 1st-generation PowerPC Macs), I would say they suck, but I haven’t used one since.
So before you start calling people names, what I said is my opinion and nothing you say is going to change my view of the article or its content.
…for preferring his Mac and OS X over Linux/GNOME. He seems to prefer almost every aspect of his X2 running Linux/GNOME except for a couple of software packages that run on OS X only.
The review seemed neatly organized into sections, but when I read each section I felt as though it was a little scattered and disorganized.
Still, his tests were good and I don’t have any arguments against his findings. I find my iMac G5 extremely noisy, and I have heard from other Mac users that the PowerMacs can be even noisier.
I too am a Mac lover. Don’t always know why, but I do love them, and OS X. I constantly battle in my head as to whether or not I should ditch Apple and go with a Shuttle and linux/bsd or wait for the Intel mini to come out (which I have been very impatiently waiting for since Apple announced the switch).
Overall not a bad article, just could have been better.
I just have a Mac Mini, but at the office we have tons of iMac hair-dryers. Still, I don’t want to go back to Linux or Windows. Never. OS X gives me something I searched for far too long, and I don’t want to give that up. I am too used to “it just works” and not having to worry about anything at all.
So much so that I am really thinking about spending tons of money on a PowerMac.
If the parts and case are “sounding” so bad, can you mod the Mac and bring it up to your standards? It would be nice to see dB changes with a before and after report and a parts list. (And I bet you could sell the HD on eBay after you upgrade your system.)
Oh, did you get AppleCare? We wouldn’t want you to make the warranty go bye-bye because you “touched” your machine. Then again, maybe we would! ;^)
Now if only Linux had some decent software. Yes, yes, I know there is a ton of software out there, but be honest: how much of it is professional quality? Is there a good office suite? I know there are several in beta, but are there any that are finished? Can you install them just by dragging and dropping them, or do you have to wade through library-dependency hell every time you put a new app in? How about drivers?
As BeOS’ sad experience proved, the best OS in the world is useless and worthless if it does not have the applications that people want, and drivers for the hardware they can buy. Linux is a fine server, truly. I seldom see anything else anymore. But as a desktop, the professional quality applications *still* are mostly not there or are in eternal beta, and the library dependency problems make desktop Linux a tinkerer’s OS, not an OS for people who’d like to get something done. Is there a place for a tinkerer’s OS? Sure. If it gives you joy and/or satisfaction to make Linux do everything you need on your desk, rock on. Just not on my desk, please.
This is exactly why I got my first Mac and OS X when Be folded. Yeah, fine, I don’t have fastest-machine bragging rights. So what? My dual G5 and even my G3 iBook run OS X, and they run professional-quality applications on a professionally maintained OS, without me having to live with Windows.
Those of you who were in the BeOS club and went mainstream afterwards may remember what a breath of fresh air it was to be able to buy applications at CompUSA. That was my experience, anyway.
Let me reiterate. If you like tweaking, resolving library incompatibilities, and other fun and don’t mind dealing with apps that are designed by hobbyists for the most part, great. Linux is the way to go. And you’d be insane to run it on a mac for exactly the reasons this author gets into.
If you like the newest games and you don’t mind having an OS that has – let’s be generous – issues with security, and you don’t mind DLL hell when you install apps, maybe Windows is for you, and again, even if you could run it on a Mac (like, next year) you’d be nuts to do so.
I use a mac. I use it because I want to do work with it, I’m tired of tinkering. The mac runs my software. It runs it fast. I don’t *care* whether it’s faster than your Linux machine. Your linux machine will not run the software I want. My mac will. So I use a mac. This is my point.
Addendum. My dual G5 doesn’t break a sweat with 99% of the work it does. What good would a cpu that can idle faster do me?
-HH
Mac OS X/Darwin are limited in drivers to what Apple decides it will support as well. Also (and I don’t know this, I am asking) does Apple directly benefit from open source on the driver issue? Or not?
Linux/BSD support a great range of hardware, and it seems to grow every day. Again, I am a Mac user; I just cannot find many faults with how the open-source OSes are coming along. To me they have reached the point where they are truly viable as desktop computers.
As a matter of fact, my mother uses my old Athlon box running Ubuntu. She was strictly a Windows user for 20 years. She picked up Ubuntu in one phone call (she lives 2700 miles from me, I left the computer in her spare bedroom when I moved). One phone call and I simply told her how the menu structure was laid out. She’s golden.
Your comments about Linux really come across as nothing more than FUD to me. Sure, there are a lot of distros out there that could fit your description of what using Linux is like, but my experience with the larger distros isn’t anything like what you portray.
Are there professional programs for Linux? You bet there are. That doesn’t mean that people don’t use other alternatives. Just because people decide to use an open source program in Linux it doesn’t mean that there aren’t proprietary alternatives. And, just because something is open source doesn’t make it inferior. Can Linux run any program ever made? No. Can OS X run any program ever made? No.
Can I install a program by dragging and dropping it? No. Do I care? No. Installation is no harder in Linux than in any other OS unless the designer of the program was too lazy to make it easy to install. Can your Mac keep track of practically every piece of software on your computer and automatically notify you of any upgrades or patches? No. Do you care? Apparently not.
You’re definitely right that Linux is good for tweaking. One of the things that I don’t like about Macs is that I’m not allowed to tweak as much as I want. I like the default options well enough for occasional use, but it isn’t quite my cup of tea.
As for drivers, I’ve never had to maintain or use my own Mac. So, I can’t really comment on how well they interface with upgrades or other hardware. That sort of thing has always been done by whoever owned the machine I was using. I can say that drivers have rarely crossed my mind with Linux because everything is so well detected and automatically setup.
I don’t begrudge anyone liking one product over another, but I really don’t like when someone makes overly generalized, inaccurate comments. Your impression of Linux may have been accurate at one time. It isn’t now.
Professional – Someone who gets paid to do something.
Amateur – Someone who loves to do something.
I prefer amateur software, thank you. It’s always late, and sometimes never “ready”, but it consistently fails to lie to me and promise features that end up broken.
Now, for the rest of your sad “argument:”
Library dependency problems are not problems; they’re solutions that work just fine. There are C++ library compatibility problems across major upgrades of gcc, yes. However, these are entirely solved by the fact that FOSS is distributed as source, not binaries. As for closed programs, they usually offer multiple builds; it’s not that hard. And typing gcc --version to find which one to download is beyond no one. The biggest problem I’ve seen with *nix closed software has been license managers, but that’s intentional.
“Can you install them just by dragging and dropping them, or do you have to wade through library dependency hell every time you put a new app in?”
Autopackage install: chmod +x package; ./package
or: sh package
or: right-click, select Properties, go to Permissions and set executable, then double-click (single-click in Konqueror).
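Spelled out, the whole install amounts to “make it executable, then run it”. A minimal sketch, assuming a downloaded file named myapp.package (a made-up name; a real Autopackage is a self-extracting shell script, so a dummy script behaves the same way here):

```shell
# "myapp.package" is a stand-in for a real download: an Autopackage is a
# self-extracting shell script, so this dummy illustrates the same flow.
printf '#!/bin/sh\necho installed\n' > myapp.package

chmod +x myapp.package   # mark the download executable...
./myapp.package          # ...and run it directly

sh myapp.package         # equivalent: hand it to the shell, no chmod needed
```

The GUI route (set the executable bit in the file manager’s Permissions tab, then double-click) is just the point-and-click version of the same two steps.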
“How about drivers? ”
Drivers in Linux work extremely well. I’ve never seen anything do better, except maybe the BSDs (I’ve heard NetBSD’s wifi support is better now). Do you know how I install my drivers? I don’t; they came with my system.
BeOS is a sad story. Maybe you should get off FOSS’s back and help out the Haiku project. You want to see apps on BeOS? There’s your chance…
A lot of games sell on Mac. I mean a lot. Have you checked out the Mac game aisle at CompUSA? That could keep anyone who’s not a hardcore gamer happy for years… I can’t run all those games on my Windows load (because of my hardware)!
What applications are you so missing that FOSS doesn’t provide? I know there are plenty, but what is it specifically that you sorely need? And why do you have to downplay the work amateurs do just to make yourself happy with your Mac decision?
“Professional – Someone who gets paid to do something.
Amateur – Someone who loves to do something. ”
True, but sometimes people who get paid to do something love it as well.
I use a Mac. I also use Linux. Every day, I find that I can get more and more work done on it. Every week, I find I’ve used my Mac just a little less than before, and my Linux desktop a little more.
Eternal beta? That’s a good thing. It means always improving. Too much tinkering? Yeah, that’s still a pain sometimes. But it also means I can find ways to do things on Linux that I probably couldn’t have, or at least not so easily, on the Mac.
I agree with the author. I love OS X. I prefer KDE to GNOME, but I see both desktops rapidly approaching Mac quality, having left Windows behind a couple of years ago. I won’t buy another Mac for myself; when Apple dropped AppleWorks from its most recent line, and when OpenOffice hit 2.0, it became a no-brainer. There are still some Mac applications I use, but none that can’t be replaced on Linux.
Unless I am mistaken, he was just trying to share his impressions after buying and using a new dual-core PowerMac. I feel that is what he did, and while he does show some biases, he also states up front that readers should come to their own conclusions.
I don’t understand why the writer should be criticized because noise is a concern to him. If you don’t have a problem with noise, ignore his comments and skip that part of the article. IMO, anything someone can build in their basement can fairly be compared to something made by a large company. Why is it unreasonable to ask a manufacturer to pay some attention in the design stages to a concern their customers might have? The initial design costs of a product are subsidized by the eventual volume shipments of that product.
Because noise is subjective, what is quiet for one is noisy for another. I have a Pentium IV rig on my desk and I don’t hear it at all. This machine has no special quieting technology added to it.
If you want a powerful computer at your desk, there are some compromises you have to make, and noise is one of them.
My single processor PowerMac is relatively quiet most of the time. But if the OS loses control of the fan system, or the thing gets hot, it’s louder than an 11,000 BTU air conditioner in the opposite window. It puts out a very noticeable breeze.
There are hundreds of complaints on the Apple discussion boards about fans going haywire after the last OS upgrade. You’d be hard put to find an Intel or AMD system that could get as loud as a PowerMac, without getting a decent-sized server.
If you want a powerful computer at your desk, there are some compromises you have to make, and noise is one of them.
Apparently not, according to the article.
“If you want a powerful computer at your desk, there are some compromises you have to make, and noise is one of them.”
I think the author demonstrated otherwise in his comparisons, where the computer that scored consistently best on performance was also the quietest.
I have 12 PCs around and I can’t work without music in my headphones. I really hate the noise. My home Athlon 64 is also very loud.
Run it from the command line using the -nodesktop option.
Seems much snappier to me, and you can use your favourite editor (clearly Emacs!) and the much more slimline Apple Terminal.
This is a must, in my opinion. Everything GUI with MATLAB on OS X is just excruciatingly slow, especially all the elements that use Java. Actually, come to think of it, MATLAB on OS X pretty much sucks in all respects compared to Windows and Linux. This isn’t a critique of Apple but rather Mathworks, who despite charging a multitude of arms and legs for their product can’t seem to put out a decent application.
Personally, I use Emacs + Terminal for MATLAB myself and find it a nice combination, though a lot of the features of MATLAB are lost that way. (Also, I use a modified matlab.el file for better syntax highlighting.)
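For reference, the headless launch suggested above looks like this. This is a sketch using MATLAB’s standard startup flags; the Emacs part assumes a typical matlab-mode setup (paths and versions vary by install):

```shell
# Start MATLAB without the Java-based desktop GUI, and skip the splash
# screen; the matlab launcher must be on your PATH.
matlab -nodesktop -nosplash

# With matlab-mode loaded in Emacs (e.g. a tweaked matlab.el on the
# load-path), M-x matlab-shell drives the same headless MATLAB in a buffer.
```

You lose the GUI editor, profiler, and figure docking this way, but plots still open in their own windows and the command line stays responsive.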
The article was okay, yes his bias does kick through but so do all of ours right?
Some of the “I don’t like that” points are basically personal preferences. The G5 chip is a very hot chip, and that’s why the machine is so “noisy”.
Thankfully, when Apple do the Intel switch, the noise issue shouldn’t be a problem. I have to say, in my experience of Windows/Linux computers vs. Macs, the Macs have been far quieter, but hey, that is my own experience. Also, noise is not an issue for me. I personally find my Mac hardware very good quality. It’s true Apple do make a lot more money on their computers than most companies, but that is why they are in a very healthy situation today. With a Mac you are paying for an all-in-one solution that just works. With my Mac I find I can get on with my stuff without the OS getting in the way. With the Mac it’s more than a computer; it’s a “just works” experience. For hassle-free computing I’m glad I paid extra for the Apple “premium”.
One of the frustrating things I found with Linux was that there was too much tinkering. It is a hobby OS. I know some die-hard Linux fan will get offended by that, but it’s true. I liked Linux for the years I used it and learnt way more about OSes thanks to it. With Apple, however, I can get a fully fledged BSD system with a fantastic GUI on top.
Overall the article was good and a good read; I found some of his comments good and the article had a professional feel. On the other hand, like all of us, his biases did kick through a bit. But if you read my post you can see my bias as well. At the end of the day I have used most versions of Windows and too many Linux distros to count, yet I prefer OS X. See, that’s my bias. But a good article nonetheless, even if I do disagree over a few issues.
Good article, went into many details in reasonable depth.
It’s good to finally see some good cross platform benchmarks.
SPEC is also about “real life” software but the submissions are tweaked to the Nth degree by the manufacturers and sometimes include libraries worth more than the machine itself!
Rayiner’s benchmarks are probably more representative of what a user will experience, although Apple’s own apps should be optimised for their own hardware and should thus do better. It’s clear the G5 can do well but it needs work in many cases, work the vast majority of users will not perform. Compilers are (finally) getting better though.
Some of his annoyances can be fixed. Files opening in random programs can be highly annoying, but you can force all files of a certain type to open in a specific program by changing the setting in the “Get Info” window; thereafter it’s no problem. I suspect the HTML editing issue is similar.
Yea. However, I remember watching someone try to do a fair Intel-to-Mac comparison by using the Absoft (I think) compilers for PPC and the Intel compilers for the Xeon… Yea, too bad the code was in Fortran and no one there knew Fortran; it might have helped in getting it to compile on something other than gcc!
I think the hardware for powermacs is pretty competitive for what people buy it for. It’s not a gaming rig, but it can play most games (you’re just spending extra money on stuff you don’t need, like dual processors).
If you want to use a Mac, you should be using it because you want OS X, or *maybe* because you think those notebooks are so pretty. But the day when you bought them to get “on power” is gone. But so is the day of MacOS Classic, so you win some and you lose some.
The solution: OSX on an X2?
Who said Apple isn’t expensive?
From a technological standpoint I couldn’t agree more with you. I’d very much like to see Apple partner with AMD rather than Intel. However, business-wise it makes sense to cuddle up next to the big player, as Intel has enough cash to buy Apple’s hardware/computer division if things go bad. AMD doesn’t.
Hi.
I think this is a very good, professional article. Many times I have seen comparisons that were not comparisons at all.
OSX is a great value per se, but go through the Apple forums across the net to learn that their hardware is not free of glitches. At that price I say it should be.
Maybe I’ll buy myself a Mac Mini soon, because I need GarageBand and iMovie to speed up my hobbies; that’s when Apple comes in handy. If I really need those apps (no counterpart on Linux or XP) and something robust, well, why not? I hope that the Apple-Intel marriage will give the line a boost in terms of speed. Today a Pentium M @ 1.6 GHz outperforms iBooks and PowerBooks, and costs less even if it’s less cool. As AppleInsider was writing recently, maybe Intel Macs will arrive next January. The problem is what to update first? The first model to go Intel will certainly embarrass the others. It should be done in one block for all the G4 lines; otherwise it will be a drawback.
I would personally not use GarageBand on a Mini. I have a PowerBook G4 1.33 and GarageBand has trouble when I try to mix more than three or four instrument tracks in real time.
Minis are mostly based on laptop hardware.
Hi,
Sorry to be OT, but in the Apple forums they suggested a Mac Mini or a 12″/14″ iBook for GB, provided that I buy an external 7200 RPM drive (FW).
BTW, I have recently tested a 12″ iBook; I was able to make 11 tracks (9 Real Instruments using loops + 2 Software Instruments) at 75% CPU. That comes really close to my XP 1.6GHz Centrino Tecra M2 running Cubase: 16 audio tracks + effects (2 compressors, 3 reverbs, 1 flanger) + 2 VSTi Cubase virtual instruments, Drums and VB1 (Virtual Bass), at 40% CPU.
I know that Software Instrument tracks can be frozen, transforming them into audio tracks and giving the CPU more free cycles. So in your opinion, what machine should I buy?
I cannot say. I can only say that you really should get a 7200 RPM drive. I have a 4500 RPM drive (or maybe a 5400) and it simply doesn’t perform with GarageBand. But if you say you have tested it, do not let my opinion prevent you from buying it. My own experiences on my PowerBook simply aren’t too good, but I have a slow drive.
I had a very brief testing session in a store; no chance to record anything, I only created an 8-bar song as described.
That’s why I’m asking for confirmation from people with real experience. Anyway, an article has been posted here on OSNews about new iBooks in January… what to think?
what to do? Such a messy situation!
Still disagree with you on the fonts though. Why don’t you up that Tahoma to 11 or something.
It looks like your central theme is that the switch to Intel is a good idea.
A good article overall. Also, Rayiner hit the nail on the head when he talked about fonts: OSX rendering is less distorted (more faithful to the design ideal) but more fuzzy; WinXP’s Cleartype is sharper but more distorted; and Freetype’s autohinting is a good compromise—for fonts that were designed with autohinting in mind, such as the Vera family.
[I use Suse 10.0 at work, I have a Powerbook laptop, and my wife has a Dell laptop running WinXP, so I have plenty of opportunities to compare the three platforms.]
However, I’d like to add that Freetype’s autohinting does not quite cut it with fonts, such as Times New Roman and, to some extent, Arial, that are designed to perform significant distortions (or transformations) in the Truetype bytecode depending on point size. Clearly, the autohinter cannot pick up these transformations, which results in font shapes and sizes that are quite a ways off from the intended design. This shows up markedly in web pages designed for specific point sizes (e.g. CNN.com). If the bytecode interpreter is turned on, subpixel antialiasing looks quite fuzzy. The OSX font renderer tries to strike a different balance, and does make use of the Truetype bytecode.
M
I think I’ll break my silence for this article, I really liked some parts of it.
Perhaps what I liked the most was the comparison of the hardware. Some Mac fans like to say that everything about Macs is top of the line; I’ve seen non-stop propaganda about the hardware being good, and often modded-down comments in reply stating that for the most part the hardware was commodity parts. Rayiner was one of those people often moderated down for speaking up about the Apple hardware being less than amazing, if I remember correctly, and now he has done an excellent job of backing up what he’s been saying.
If there was one thing I didn’t like, it’s that the comparisons were to GNOME, which I myself don’t like enough to use.
Overall it’s a very good, and most importantly a brutally honest, article. I would like to see a contrasting article for comparison though, since this one stripped the polish right off of those expensive Macs for me.
This was very nice, factual, reasonable and above all, detailed enough to give you real insight into what the tradeoffs are. Thank you for this.
The reason for talking at length about the noise issue is that the G5, being designed in the wake of the MDD “Wind Tunnel” fiasco, is supposed to be a quiet computer. I’ve heard Mac fans, numerous times, say something along the lines of “enjoy your PC with its jet-turbine fans!” Objectively, the G5 is not a quiet machine. Using my X2 as a relative scale, I’d say it’s on the order of 40db-50db. 30db is usually considered quiet, and 20db is the cut-off for silent. Your experiences in a datacenter are irrelevant here. This is a workstation, not a server. It’s not going to sit in a room all by itself. It’s going to be sitting a few feet from the user.
Your point about compromises is well taken, but compromises aren’t the limiting factor here. The 970MP reportedly maxes out at 100W, the same as the X2 I used for comparison. The X2 gets by just fine with a single, low-flow 120mm fan on a large heatsink.
1) Change the heatsink dimensions slightly. Make it a bit deeper (into the case), and a bit narrower (front of case to back of case). Decrease the fin density; surface area isn’t the bottleneck here.
2) Replace the 4 80mm fans in the CPU tunnel with two 120mm fans pulling air out the back of the case. Push-pull is usually not very helpful anyway (noise doubles but flow volume increases only marginally), and with the reduced fin density, it won’t be necessary. Run the 120mm fans at 1/2 the speed of the old set. It’s hard to tell what size the current fans are, but if they’re 80mm, then doing this will keep about the same airflow rate, while more than halving the noise. It’ll move the noise to a lower frequency (less annoying) range to boot.
3) Forget about the dedicated cooling section for the drives. The PowerMac can only hold two hard drives and an optical drive. There is no way they’ll kick out more than 30W of heat. Instead, merge the PCI section with the drive bay section and use a single large 120mm fan to pull air out of it.
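The reasoning behind these suggestions can be sanity-checked with a rough sketch. This assumes the sixth-power noise scaling cited earlier in the article, and the 80mm fan size is my guess about the stock fans; real fans only approximate these affinity laws:

```python
import math

def noise_delta_db(speed_ratio: float, exponent: float = 6.0) -> float:
    # If noise power scales as (fan speed)**exponent, the level change in dB
    # is 10 * exponent * log10(speed_ratio).
    return 10 * exponent * math.log10(speed_ratio)

def flow_ratio(diameter_ratio: float, speed_ratio: float) -> float:
    # Volumetric flow scales roughly with swept area times blade speed,
    # i.e. diameter**2 * rpm (a simplified affinity-law estimate).
    return diameter_ratio ** 2 * speed_ratio

# Swap 80mm fans for 120mm fans at half the speed:
print(flow_ratio(120 / 80, 0.5))      # 1.125 -- slightly MORE flow per fan
print(round(noise_delta_db(0.5), 1))  # -18.1 -- a large drop from speed alone
```

In other words, under these (idealized) scaling assumptions, a bigger, slower fan keeps the airflow while cutting the noise dramatically, which is the whole basis of the redesign above.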
None of these things are all that novel. I’m sitting at a 3GHz P4 Dell system right now (in a lab), and it dissipates about 100W, and is noticeably quieter than the PowerMac. Of course, Dell has always been good about engineering their cases to be quiet (something Mac folks will never admit).
I spend most of my day working with various Sun and HP/Compaq servers, so for me cooling is important even if it raises the ambient noise level. I would rather have cool CPUs and memory and a noisy machine than a whisper-quiet box in which everything is burning up (I had an HP 9000/712 at a previous job that stopped working because the disks were too hot). That machine was very quiet.
Look at what can be put into a dual-core PowerMac and you can plainly see why it has all the cooling:
http://www.apple.com/powermac/specs.html
Think of your machine maxed out and then think of what it would take to cool it. Apple’s route was to provide more cooling (for those who would max the machine out) rather than to provide too little.
The CPU in my X2 stays below 55C loaded, well under AMD’s 70C guideline. The fanless 6600GT stays 10C cooler (just due to an intake fan 18 inches away) than it does with the annoyingly loud stock cooler.
Again, I’m not talking about reducing cooling capacity. I’m talking about being smarter about how to provide that cooling capacity. The G5 case is big and roomy. There is no reason Apple couldn’t have used 120mm fans throughout. That alone would have reduced noise significantly.
If you’re into reviewing apples and oranges, that is…
BTW, check out the Rating breakdown. The admins all rated it as a 10, raising the rating from an 8.74 (without them) to an 8.94. Should admins really be voting on stories?
Argument in favour: Admins got opinions too and are entitled to them.
Argument against: In an ideal world, moderators/admins should remain neutral.
Take a pick. Personally I think they should refrain from voting, but understand that they also want to give a good article thumbs up in order to encourage the author and others to write good, quality articles.
However, software is what counts and OS X is the platform that runs the applications I want to run. For that reason alone, the PowerMac isn’t going anywhere any time soon.
Basically, if the applications were on the PC, he’d have dumped the G5 box. This of course brings us back to that question of – with systems like the X2 system in the article, why is Apple going with Intel instead of AMD? Take the author’s X2 system, add 20% to it for Apple’s markup, and you’ve still got something far superior to any Intel system that will be available in the next two years. Intel isn’t close to releasing anything competitive with the AMD64 offerings.
Yes, the Pentium M derived chips Intel is supposed to have next year would make excellent iBooks/PowerBooks, but Apple still could have gone with AMD64s for their desktop units. Well, maybe next Nov he’ll have another article comparing the Apple/Intel system to an X2 running Ubuntu. It would be interesting to see how things changed.
I think Apple wants to deal with one producer for the discounts…. And AMD doesn’t yet produce a decent mobile chip, unfortunately.
I agree though, it’d be much nicer to see dual Opterons in the PowerMac line…
Now I wonder, will they still be called powermacs? Mactel isn’t very sexy, I wouldn’t buy that….
Apple isn’t buying CPUs from Intel, it’s buying Centrino platforms. These are an integrated, low-cost solution comprising decent CPUs, built-in sound, built-in wireless (Wi-Fi and Bluetooth), built-in networking and optionally built-in graphics.
That kind of solution is ideal for iBooks, Powerbooks and Mac Minis. Also it’s worth remembering that Apple is still pretty much a 32-bit company. The iBooks, Powerbooks and Mac Minis still use the 32-bit G4, and very little of OS X or Mac software takes advantage of 64-bit computing. As a result, the AMD64 ISA (or EM64T as Intel would have you call it) simply isn’t a factor.
I had a handful of issues with Mr. Hashem’s article. While most of the article was well written, the fact that it played upon some old Mac stereotypes, which are often used by some PC users to justify their purchases, caused me to give the article a 1 rating.
Whether anyone agrees with me or not is beside the point. That’s my opinion and I’m entitled to it. I feel that I can justify my vote but I certainly shouldn’t have to.
I just thought that all of you would like to read the response I just received from Eugenia for offering my vote:
—
Edit by Thom: Removed: personal content/conversation was posted without permission.
Kudos, Hashem!
In the past I have disagreed with you, but this article is your opinion, and as such it should be respected. Your analysis was measured and to the point. Your grammar and spelling were faultless, something rare in this environment.
Thanks for the well-thought-out article.
I recently purchased an AMD 3800+ x2 based system using an ATI EXPRESS 200 based motherboard.
It was quite cheap and it is very, very fast. In many ways I am pleased with it.
Yet I was surprised to find it can only run Windows.
I have tried several versions of Linux including SUSE Professional, Knoppix 4 and Ubuntu – of these only Knoppix ever boots (in vesa mode, with no network).
Even VMware 5 does not work on this PC; it crashes when writing to the virtual disk!
I don’t know how much trouble it was (or how many weeks it took) for the author of this article to get Linux running on his AMD64 PC; perhaps I am just unlucky.
I installed Fedora Core 4 for AMD64 without problems. It’s stable and works great – nothing broken except some third party wireless drivers (ACX100). In fact, my dynamic CPU frequency adjustment seems to be broken on the i386 version of the same distro, so I’ve broken even. I suspect your ATI Express chipset may be causing problems, not your AMD64 CPU.
Hmm, that’s interesting. It sounds like one of the systems I’ve been considering getting. That said, it _must_ be able to run Linux for me to even consider purchasing it (mind you, I’d look at replacing the ATI with something nVidia instead). What’s the make and model of it, so I know?
This PC has an ECS RS482-M motherboard, which uses the ATI Express 200 chipset. I am not saying that Linux won’t run on this board – just that none of the common standard Linux systems that I have lying around seem to run on it.
When you see web sites saying that ATI have linux drivers for the express 200 chipset all they mean is that ATI have released video drivers for the embedded graphics card.
The main problem is that Linux cannot find the right IRQs for the devices on this board; also, the clock runs at a stupidly high rate, causing crashes.
I get PCI errors on the network card.
I have never been able to get the network card working when booting Linux native on this PC.
see http://www.ecs.com.tw/
The board seems to run XP OK; XP benefits from Windows drivers for this board.
“Let me reiterate. If you like tweaking, resolving library incompatibilities, and other fun and don’t mind dealing with apps that are designed by hobbyists for the most part, great.”
I don’t know what it is with mac people, they seem to need to trash everything else without knowing anything about it. Just install Mandriva, Suse, Fedora, Ubuntu, PCLinux, and this is just obviously false. Do you think OO is designed by hobbyists? KOffice? Emacs? Gedit? K3b? Kate? Firefox? These are just the things I use all the time, they, in the famous phrase, ‘just work’. There’s tons more.
What you are describing about dependencies was an issue several years ago. And still, once in a while, you need to install a library to get something working. Happened to me the other day on a machine I am supporting, with Tellico, another great, free “hobbyist” package. It’s rare enough that I can’t recall the last time before that, and it was over in about two minutes.
You need to realise that making up nonsense about the alternatives is not going to convince anyone of anything, except of the craziness of the whole bunch of advocates. If you want to turn people on to your platform, instead of off, try to get real about the alternatives. Failing to do this is what nearly sank Apple back in the nineties, and it’s disappointing to see it continuing in its adherents into the next century.
Among other things, I used the following:
Antec SLK-3000B Case
Antec Tri-Cool 120mm Case Fans x2
AMD64 3000+ CPU
Arctic Cooling CPU Cooler
Coolmax 550W PSU (1 120mm Fan)
nVidia GeForce 6200TC (passively cooled)
Seagate Barracuda SATA HDD
Cool ‘n Quiet is enabled in the BIOS. With the system running under load and this part of the house very quiet, I can put my ear against the case and not be able to tell whether the system is on or off. Now _that_ is silent. The fans for the PSU and CPU run at around 1000 RPM and the case fans at 1200 RPM IIRC. Idle temps. are around 36/37 degrees Celsius for CPU and motherboard.
With Cool ‘n Quiet enabled, the CPU runs at only 1 GHz, but it will switch to top speed when needed. Even during a demanding operation like doing a large system update or loading a complex application, the CPU spends much of its time in power-saving mode due to waiting on all the disk activity.
AFAIK, neither the P4 nor the G5 have a comparable feature on the desktop right now, and that may be one reason why the author’s system is much quieter. By contrast, my old P4 with a similar cooling setup runs hot even when idle, and its motherboard almost overheats under heavy load.
I enjoyed reading this review. It’s quite refreshing to see an honest appraisal of a product that isn’t afraid to be critical.
One thing I’ve always disliked about reviews of Apple products is the feeling that the reviewer is evangelizing the product first – an actual, honest review being of secondary or low importance.
I think this is the best review article I have read in a while. Out of all the Mac vs x86 reviews I have read, this is the most scientific, and I knew all along that the PPC is just as good as the AMD architecture… some wins and some losses. But overall they seem to be head to head. Great article.
G4 and G5: Processor Performance: Automatic
You can nitpick all you want about minor variations in hardware, fan noise, interface and software preferences and the like. You can even write long-winded explanations about why YOU feel that way, but when reality smacks you upside the head, you will see that there is no other desktop system today that can touch the Mac when it comes to providing a superior user experience. None. Period.
Just wanted to chime in: this is certainly among the better Mac reviews I’ve seen. As to the noise issue, that would indeed be a showstopper for me. I have a 15″ PowerBook running Tiger, and the fan noise on it is horrible. For goodness sake, this Dell Latitude D610 next to me, which is a pretty crappy laptop, is way quieter than the PB, even under heavy loads.
As to the OSX vs Linux usability comparison, I have to say, I’m certainly more in the Linux camp. Yes, even from a UI perspective. Some things on OSX are clearly better (excellent transparency capability for instance and perhaps general prettiness (if aqua is your thing)), but overall I find the various desktop setups for Linux to simply be more intuitive, and functional. It’s not necessarily that OSX is so bad, it’s just that, overall, I find linux (or more precisely gnome/kde/xfce/etc) to be more usable.
Also, as others have noted, the ability to update nearly your entire system and all apps from one location, or just a simple apt-get dist-upgrade, is miles ahead of what OSX can do.
Going by comments that I see sometimes in this matter, I think perhaps some people may have gotten burned by trying Linux out say 4-5 years ago, when clearly there was much to be desired in the desktop space, and gave up then. Seriously though, it’s a whole different game now; people really ought to give it another try.
I have been using the latest Ubuntu as well as Tiger, and there simply is no comparison. This guy can say Ubuntu (Gnome) is in the same league if he wants, but he is out of his mind. Also, as for the components Apple uses, all I can say is that the Macs I have owned and own today last far, far longer than their PC brethren. When I am ready to take one of my PCs to the scrap heap, the Mac sitting next to it that is 5 years old and counting is still chugging away just fine. My dual G4 tower is a masterpiece of engineering. It still has the best case design I have ever seen on a computer, and the layout and components inside are top notch. I can only speak from having had many PCs and many Macs and seeing how each performs daily in the real world, and quite frankly there is no comparison. The Apple machines last longer and retain their value longer than their PC counterparts.
“The Apple machines last longer and retain their value longer than their PC counterparts.”
You’re not comparing apples to apples here. Apple makes computers; “PC counterparts” is a generic term. Are you saying that Apple computers are superior to counterparts from Dell, for example? Or that they are superior to a system a friend of yours put together from newegg.com?
Apple does not make superior products to everyone else. What they’re better at is integrating the OS and the hardware into a package that works, but the hardware is inferior to many other products out there.
Your purchasing and usage habits have nothing to do with the quality of the hardware Apple uses. To put it in perspective, I have a PC with a 286 from 1988 that still works fine. I could have it disposed of tomorrow if I wanted, or I could plug it in and remind myself of how irritating using that computer was when I was a little kid. You’d probably toss it out and put up a tick mark next to your 5-year-old Mac. In total I have 23 computers (two are Macs), all in functioning condition, ranging in acquisition from ’88 to the one I put together three weeks ago. Do I get a gold star? Does that make a motherboard out there with low-quality capacitors better or worse? Does it change what hardware Apple puts in the modern Mac? (Hint: it changes nothing)
The quality of the hardware of a computer can only be assessed by the quality of the hardware in a computer. It doesn’t generalize. It doesn’t carry over from model to model because of the brand. Your anecdotal experience and my anecdotal experience over a range of hardware over many years doesn’t change the quality of the HDD or RAM that came in my iMac (neither was the best when I bought it) or the PowerMac. Pretending that Apple’s hardware is of objectively superior quality when most of it consists of stock, commodity hardware that many other OEMs use only serves to make your purchasing decisions less sensible and make yourself appear biased.
Yeah, there are really bad PC OEMs out there and if you buy from them, the chances are pretty good you’ll get a shitty computer. Well, so what? How does that make the quality of the hardware selected by Apple top of the line when it isn’t?
The PowerMac G5 is a frustrating machine. As a piece of hardware, it is inferior in almost every way to the Athlon X2 machine sitting next to it. That is not to say that it is a bad piece of hardware, but rather that it can’t match up to a very excellent machine.
I strongly disagree. The G5 has a memory controller which does ECC; the X2’s doesn’t. The G5 can support a maximum of 16GB, whereas the X2’s DFI LANParty supports only 4GB.
The hard drive in my PowerMac is, as I mentioned, a cheap Western Digital unit. If you type “250GB SATA” into Froogle, it is the unit that comes up at the very bottom of the price list.
The Seagate drive on a Google search is in the $110–$163 range; the Western Digital, $117.81–$172.66.
http://www.google.com/search?q=Western+Digital+WD2500JS+250GB+SATA&…
http://www.google.com/search?q=Seagate+7200.8+250GB+SATA&start=0&st…
It was a well written article but too much bias and not much research.
If you read the article carefully you would realise he is not dissing the G5 CPU or MOBO. Those things he liked. It was the excessive use of FANS, the noisier HDD, generic RAM, soft keyboard and mouse, and the acoustic design of the case.
Whereas the components in the X2, except for the motherboard were quality items handpicked by himself.
Take a deep breath, he doesn’t hate his G5.
If you read the article carefully you would realise he is not dissing the G5 CPU or MOBO. Those things he liked. It was the excessive use of FANS, the noisier HDD, generic RAM, soft keyboard and mouse, and the acoustic design of the case.
I am sorry, but hardware in a PC means the electronics first and foremost. Quality of the fans is subjective. The memory in my G5 is Samsung; I’d hardly call that generic. Samsung is one of the leading suppliers of memory chips.
Whereas the components in the X2, except for the motherboard were quality items handpicked by himself.
You hit the nail on the head. The operative words being “hand picked”.
I don’t care if he likes or hates his G5. But the X2 he has is not a very superior piece of hardware just because it has hand-picked fans.
Since when did noise become a sole factor in determining HDD quality? He did massive CPU benchmarks but failed to do any reasonable I/O benchmark. Why?
Here are the acoustic specs from WD’s site for the WD2500JS.
Acoustics (average)
Idle Mode: 28 dBA
Seek Mode 0: 33 dBA
Seek Mode 3: 29 dBA
Here is the acoustic information from Seagate on the 7200.8
Acoustics
Idle (bels, sound power): 2.8 bels / 28 db
Quiet Seek (bels, sound power): 3.2 bels / 32 db
Performance Seek (bels, sound power): 3.7 bels / 37 db
As you can see from the above specs, the WD2500JS is quieter than, or as quiet as, the Seagate 7200.8 250 GB.
If Rayiner claims he can hear the difference he is lying. Human hearing isn’t that sensitive.
I am only pointing out the parts of the review which are too filled with opinion to be taken seriously. Some are flat out false.
“As you can see from the above specs the WD2500JS is quieter or as quiet as the seagate 7200.8 250 GB. ”
You believe those specs, and you think they’re comparable? Haha.
“If Rayiner claims he can hear the difference he is lying. Human hearing isn’t that sensitive. ”
Yes, he’s lying. The point of this article was to lie about noise levels on Seagate and WD HDs. Right on.
I am sorry but hardware in a PC means the electronics first and foremost.
I thought Macs were about the whole package?
Quality of the fans is subjective.
Subjective does not mean incomparable. Evaluating subjective things just requires a good evaluation function, and an acceptance of the fact that the result is only as applicable as the original function. Conveniently, most people tend to perceive noises in the same way. In the context of fans, mechanical noises are perceived as more offensive than airflow noises. Smooth, consistent noises are perceived as less distracting than intermittent noises. Higher-pitched noises are considered more irritating than lower-pitched ones. By these objective measures, I can make a subjective judgement that the G5’s fans are lower in quality than the Yate Loons in my X2.
The memory in my G5 is samsung, I’d hardly call that generic. Samsung is one of the leading supliers of memory chips.
Samsung supplies memory chips, they don’t make DIMMs. Who made the DIMMs in the G5? Don’t know? That’s why they are called generic! FYI, most generic RAM uses name-brand chips (Infineon, Hynix, Micron, etc). It’s the actual PCBs and surrounding circuitry that differentiates generic sticks from name-brand ones.
You hit the nail on the head. The operative words being “hand picked”.
And the parts-list for the G5 was generated by a computer?
Since when did noise become a sole factor in determining HDD quality?
Since hard-drives in the same market category ceased to have interesting performance differences?
He did massive CPU benchmarks but failed to do any reasonable I/O benchmark. Why?
Let’s look at a massive hard drive benchmark. From Tom’s Hardware’s Sept. 2005 Roundup: http://www.tomshardware.com/storage/20050927/hd_round_up-36.html
After thoroughly benchmarking 22 hard drives, you know what his conclusion was? “Current drives are fast drives”. He further goes on to say: “And while performance differences from one drive generation to the next may be measurable, only in a few isolated cases do increases make a perceptible difference to users.” If the performance difference between generations is negligible, what does that make the performance difference within a generation? You guessed it: immaterial!
[Immaterial Vendor Specs Clipped]
Now, let’s consider the vendor specs. First, they don’t matter. They are usually made up anyway. Second, even if they weren’t made up, your data points don’t prove anything. Mysteriously absent anywhere in that little clip are the distances at which the drives are measured. Sound intensity decreases with the inverse square of the distance, so something at 3m will be 1/9 as intense as the same thing at 1m. In terms of measurements, a source that reads 37db at 1m will read about 28db at 3m. Therefore, without knowing the distances in question, you can’t compare the two numbers at all.
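The arithmetic here is easy to check with a minimal sketch (the 1m/3m distances are illustrative only, not WD’s or Seagate’s actual test conditions, which is exactly the point):

```python
import math

def db_at_distance(db_ref: float, d_ref: float, d_new: float) -> float:
    # Free-field inverse-square spreading: intensity falls as 1/d**2,
    # so the level changes by 20*log10(d_ref / d_new) dB.
    return db_ref + 20 * math.log10(d_ref / d_new)

# A source that reads 37db at 1m reads about 28db at 3m...
print(round(db_at_distance(37.0, 1.0, 3.0), 1))  # 27.5
# ...and a 37db figure taken at 3m would be about 46.5db at 1m.
print(round(db_at_distance(37.0, 3.0, 1.0), 1))  # 46.5
```

So two spec-sheet numbers taken at unknown, possibly different distances simply cannot be compared.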
If Rayiner claims he can hear the difference he is lying. Human hearing isn’t that sensitive.
The Just Noticeable Difference (JND) of the human ear is a complex function, but for regular-volume noises, 1db is a decent figure. Therefore, the difference between 28db and 29db is noticeable, and the difference between 37db and 33db is very noticeable. Of course, it should be, since a 37db noise carries well over twice the sound power of a 33db noise!
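To put numbers on that last claim, here is a quick sketch converting a dB difference into a sound-power ratio:

```python
def power_ratio(delta_db: float) -> float:
    # Each 10db is a factor of 10 in sound power; in general the ratio
    # is 10**(delta_db / 10).
    return 10 ** (delta_db / 10)

print(round(power_ratio(37 - 33), 2))  # 2.51 -- over twice the power
print(round(power_ratio(3), 2))        # 2.0  -- the familiar "3db doubles power"
print(round(power_ratio(1), 3))        # 1.259 -- a 1db step, near the ear's JND
```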
I thought Macs were about the whole package?
Yes they are but I was talking about hardware and the point I was responding to in the article, so were you. So please pay attention and stay on topic.
Samsung supplies memory chips, they don’t make DIMMs. Who made the DIMMs in the G5? Don’t know? That’s why they are called generic! FYI, most generic RAM uses name-brand chips (Infineon, Hynix, Micron, etc). It’s the actual PCBs and surrounding circuitry that differentiates generic sticks from name-brand ones.
Really, you must be blind and quite ignorant. The module on my G5 is
samsung m378t335cz3-cd5. 256mb pc2-4200u-444-12-c3. It even has a blue samsung logo on it and looks something like this
http://www.newegg.com/Product/ShowImage.asp?Image=20-147-147-01.jpg…)%20%20Unbuffered%20System%20Memory%20Model%20 M378T2953CZ3-CD5
Newegg even sells samsung dimms.
http://www.newegg.com/Product/ProductList.asp?Submit=Suggested&Desc…
[Immaterial Vendor Specs Clipped]
Why, because they prove you wrong? Convenient. Did you measure the noise levels of the two drives and get real data to counteract the published specs?
Now, let’s consider the vendor specs. First, they don’t matter. They are usually made up anyway.
Really? And I should take your word for it? I have actually been to my company’s anechoic chamber to measure the fan output levels for a workstation.
Second, even if they weren’t made up, your data points don’t prove anything. Mysteriously absent anywhere in that little clip are the distances at which the drives are measured.
Paranoid today, aren’t we? The measurements are verbatim what the manufacturers claim. I converted the bels to decibels for the Seagate.
Seagate:
http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda7200.8.p…
Wdc:
http://wdc.custhelp.com/cgi-bin/wdc.cfg/php/enduser/std_adp.php?p_f…
Sound intensity decreases with the inverse square of the distance, so something at 3m will be 1/9 as intense as the same thing at 1m. In terms of measurements, a source that reads 37db at 1m will read about 28db at 3m. Therefore, without knowing the distances in question, you can’t compare the two numbers at all.
But I should take your ears to be more accurate. Please!
The Just Noticeable Difference (JND) of the human ear is a complex function, but for regular-volume noises, 1db is a decent figure. Therefore, the difference between 28db and 29db is noticeable, and the difference between 37db and 33db is very noticeable. Of course, it should be, since a 37db noise carries well over twice the sound power of a 33db noise!
Yes, a 3db difference should be noticeable, but with other noises present it probably isn’t very discernible. I can guarantee that in a double-blind test you would fail to identify either drive by name.
Thanks for making my point. The Seagate drive is the one rated at 37db at performance seek and the WD at 33db, so the Seagate drive should be perceived as twice as loud as the WD. But you claimed the contrary, indicating that your observation was biased and you didn’t perceive correctly.
BTW, I have the EIDE version of the 7200.8 200GB in my TiVo and it is just as loud as the WD in my G5.
I did like the article and thought it was quite good, but your biased opinions detracted from its quality. I know you want to defend your baby, but please be reasonable.
Yes they are, but I was talking about hardware, as was the point in the article I was responding to, and so were you.
And fans aren’t part of the hardware? Hardware == the pieces you can throw at people. Fans are hardware.
Since your links don’t load, I’ll leave that bit for later.
Why because they prove you wrong?
Because they don’t prove anything. Citing decibel figures without citing distances is like saying that person A is taller than person B because his image in a photo is an inch larger. Without knowing distance, decibel figures are meaningless!
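For the record, the distance dependence can be sketched with a few lines of arithmetic. This is a purely illustrative sketch assuming a free-field point source (the drive figures are the thread's own numbers, not measurements):

```python
import math

def spl_at_distance(spl_ref_db, d_ref_m, d_m):
    # Free-field point source: intensity falls as 1/r^2,
    # so the measured level drops by 20*log10(d/d_ref) dB.
    return spl_ref_db - 20 * math.log10(d_m / d_ref_m)

def power_ratio(delta_db):
    # Sound-power ratio corresponding to a level difference in dB.
    return 10 ** (delta_db / 10)

# A source reading 37db at 1m reads about 27.5db at 3m:
print(round(spl_at_distance(37, 1, 3), 1))   # 27.5
# A 4db gap (37db vs 33db) is roughly 2.5x the sound power:
print(round(power_ratio(4), 2))              # 2.51
```

Measured at 3m instead of 1m, the same source reads nearly 10db lower, which is exactly why a spec-sheet figure without a stated distance is uninterpretable.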
Did you measure the noise levels of the two drives and get real data to counteract the published specs?
Of course not. I didn’t need an SPL meter to tell which one was louder!
Really? And I should take your word for it?
Go to silentpcreview.com and post on the forum that you trust WD's (or Seagate's!) decibel specs. You'll be laughed off the site, I guarantee you. Manufacturers' db figures are about the most unreliable measures in existence, more so than PSU watt ratings!
I have actually been to my company's anechoic chamber to measure the fan output levels for a workstation.
Am I supposed to be impressed? Unless you measured the specific hard drives in question, this information is useless to me. All it shows me is that you should have a better grasp of the meaning of a decibel than you do.
Paranoid today, aren't we?
About what? I’m not saying that WD cheated on their noise results. I’m saying that, since there are no standards for measuring the loudness of a computer component, the two numbers in question cannot be compared. Are they both at 3m? 1m? The specs don’t say that!
But I should take your ears to be more accurate.
I’m not saying my ears are more accurate than Western Digital’s SPL meters. I’m saying that my ears are both 1m from the damn hard drives, and I can hear that the WD is louder. I have no idea how far WD’s SPL meter is from the drive, and how far Seagate’s is. Without those figures, the numbers cannot be compared! I could say the Seagate is 0db and the WD is 32db, if I measured them at the right distances!
Yes, a 3db difference should be noticeable, but with other noises present it probably isn't very discernible. I can guarantee that in a double-blind test you would fail to identify either drive by name.
You can guarantee this without having heard either. Well shucks. For your information, I’m not the only one that feels this way. http://forums.macnn.com/showthread.php?t=273760. Search for “Got my new DC 2.0 today.” The post is by iDaver. I quote:
“There’s a Western Digital hard drive in this Mac; the second such HD I’ve owned. Their constant clickity clackity when accessing is obnoxiously loud. As soon as I can, I’m going to replace this hard drive with something bigger and quieter and relegate the WD to backup status. I would never recommend a WD hard drive to anyone.”
The Seagate drive is the one rated at 37db at performance seek and the WD at 32db, so the Seagate should be perceived as twice as loud as the WD.
Separate point entirely. You claimed that differences of 4db would not be noticeable. I did not stipulate that the Seagate was 4db louder than the WD, only that a 4db difference would indeed be noticeable.
But you claimed the contrary, indicating that your observation was biased and you didn't perceive correctly.
Could you at least try following the argument? I did not acquiesce that the Seagate was louder. I only stated that even if your figures were correct, the difference between the two would be easily perceptible. This is especially true considering the high frequency of hard-drive seek noise, which would make it stand out from the ambient background.
BTW, I have the EIDE version of the 7200.8 200GB in my TiVo and it is just as loud as the WD in my G5.
The 200GB 7200.8 is internally different from the 250GB 7200.8. The former uses the older 100GB platters, while the latter uses the newer 133GB platters. Furthermore, do you have the 2500JS in your G5? Unless you do, your little observation isn’t really relevant, is it?
And fans aren’t part of the hardware? Hardware == the pieces you can throw at people. Fans are hardware.
Sure, by some contorted logic a machine with quieter fans has vastly superior hardware to a machine with ECC memory and slots for four times the memory.
Clearly fans are the primary decision-making point when it comes to choosing between two computers!
Since your links don’t load, I’ll leave that bit for later.
Sure. But try this link anyway. Samsung does make DIMMs, and one of them is in the G5. Apple generally uses Micron, Samsung, or Hynix branded DIMMs. A friend's PowerBook did come with Nanya DIMMs. All name brands, not generic.
http://www.samsung.com/Products/Semiconductor/DDR_DDR2/ddr2.htm#ddr…
You can guarantee this without having heard either.
I have heard both.
Well shucks. For your information, I’m not the only one that feels this way. http://forums.macnn.com/showthread.php?t=273760. Search for “Got my new DC 2.0 today.” The post is by iDaver. I quote:
Here is what the same person said a week later!
iDaver:
“I’ve had my DC 2.0 for over a week now. It’s pretty quiet and there are no problems at all. It’s very fast. ”
Let's see, same thread, other views.
jamil5454:
“Hmmm… I always thought my Rev.C Dual 2.0 was a very quiet machine, but then again, I have it sitting on a platform underneath my desk. And I really can’t hear my hard drive, except when I bent down and listened for it.
Mine’s a WDC WD1600JD-41HBC0.
I’ve got Samsung RAM originally in mine, then I added 2×512 Crucial recently and it works great. I just wish I had more slots so I don’t have to pull RAM out to put more in.”
hardcat1970:
“i got my new dual 2.0 and have the same hard drive. It does make noise but it is not too loud. ”
Footy:
“Slim,
It’s been extremely stable and very quiet. I did fork out for ECC memory but that’s really just a personal preference. The 256 card is ok and it play all my games very well except Tiger Woods 2005, and that’s probably more to do with the Aspry port than anything. However I would have preferred a new ATI card and as soon as they release a PCIe card I’m on it. ”
Your point?
Separate point entirely. You claimed that differences of 4db would not be noticeable. I did not stipulate that the Seagate was 4db louder than the WD, only that a 4db difference would indeed be noticeable.
http://www.seagate.com/support/kb/disc/tp/acoustics.html
“Today, nearly all PCs ship with fans. A quiet fan is about 34 dB. This overwhelms most drive noise. For example, a 28 dB drive combined with a 34 dB fan will result in about 35.0 dB audible noise, while a 30 dB drive combined with the same fan will result in about 35.5 dB audible noise. This 0.5 dB is imperceptible to the user.”
So unless you are listening to the drives in isolation, you can't possibly conclude one is louder than the other. You are comparing the sound of the Seagate drive in the X2 chassis with the WD in the G5 chassis. By your own admission, the X2 chassis is better damped. Put the Seagate drive in the G5 chassis and have a listen. I bet you will find it equally loud.
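For reference, decibel levels from independent sources combine as powers, not linearly, and the figures in the Seagate quote above can be checked with a short sketch (purely illustrative, using the quote's own numbers):

```python
import math

def combine_db(*levels_db):
    # Incoherent noise sources: convert each level to power,
    # sum the powers, and convert back to decibels.
    total = sum(10 ** (l / 10) for l in levels_db)
    return 10 * math.log10(total)

# A 28db drive plus a 34db fan yields about 35.0db total:
print(round(combine_db(28, 34), 1))  # 35.0
# A 30db drive plus the same fan yields about 35.5db:
print(round(combine_db(30, 34), 1))  # 35.5
```

Both results match the 35.0db and 35.5db figures Seagate quotes, so that part of the arithmetic checks out either way.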
I’m not saying my ears are more accurate than Western Digital’s SPL meters. I’m saying that my ears are both 1m from the damn hard drives, and I can hear that the WD is louder. I have no idea how far WD’s SPL meter is from the drive, and how far Seagate’s is. Without those figures, the numbers cannot be compared! I could say the Seagate is 0db and the WD is 32db, if I measured them at the right distances!
There are industry standards specified for measuring acoustics. Seagate and WD, I am positive, follow these standards set by ISO, ANSI, and ASTM. Both publish sound power levels on the A-weighted scale.
http://zone.ni.com/devzone/nidzgloss.nsf/webmain/980606db5a9ded4486…
From the above seagate link.
“Reality Versus the Lab or Chamber
The industry-standard laboratory practice for measuring the acoustics of disc drives is to suspend the drive in a special sound-absorbing chamber (called an anechoic or hemi-anechoic chamber). The chamber eliminates or reduces reflected noise and noise from outside sources. Sound pressure measurements use a single microphone and sound power measurements use a spherical arrangement of many microphones. Prominent tones are also monitored. Results are reported using the A-weighted scale.
While this practice can accurately measure the airborne acoustic emissions from a disc drive, we all know that drives are never used while suspended in a special chamber. Rather, they are mounted in a chassis, which is in turn placed in some kind of enclosure that forms the physical system (see chassis section below).
The real question most customers want answered is, “How loud will this drive sound in my system?” Unfortunately, different drive/system combinations may yield different answers. For example, a drive that sounds quietest while suspended in an acoustic measuring chamber may actually sound louder when mounted in a system, while a drive that sounds quietest when in a system may not be the quietest when tested stand-alone. Several factors contribute to this disparity, including prominent tones, structure-borne acoustics, measuring techniques, objective measurement versus subjective hearing and so on.
Generally speaking, drives with quieter stand-alone acoustic specifications will also sound quieter in systems and vice versa. Very minor specification differences, 1–2 dB, might be indistinguishable or even reversed when the drive is in a system. Furthermore, differences of 1–2 dB are difficult for the human ear to detect. Annoying or prominent tones created by the structure-borne acoustics may also affect user perception of overall system acoustics.
Seagate follows the industry-standard process of specifying sound power acoustics (both seek and idle) for stand-alone drives. Our chamber, test methodologies and reporting all conform to the pertinent and applicable portions of various ISO, ANSI and ASTM standards. We also perform extensive subjective tests using statistically valid sample sizes to evaluate both stand-alone drives and drives inside systems. These tests are performed on both Seagate and competitors’ drives in an ongoing effort to make Seagate’s drives as quiet and unnoticeable as possible in users’ systems.”
Could you at least try following the argument? I did not acquiesce that the Seagate was louder. I only stated that even if your figures were correct, the difference between the two would be easily perceptible. This is especially true considering the high frequency of hard-drive seek noise, which would make it stand out from the ambient background.
You try and follow the arguments. If the specs are correct, the WD drive is the quieter drive. Get it?
The 200GB 7200.8 is internally different from the 250GB 7200.8. The former uses the older 100GB platters, while the latter uses the newer 133GB platters. Furthermore, do you have the 2500JS in your G5? Unless you do, your little observation isn’t really relevant, is it?
Not according to Seagate's site: the drives share the same quiet electronics, and the platter density shouldn't make that much difference. I do have a WD2500JS in my G5.
Sure, by some contorted logic a machine with quieter fans has vastly superior hardware to a machine with ECC memory and slots for four times the memory.
I said, I quote, “superior in almost every respect”. The memory support would be the reason for the “almost”, as well as the extra PCI Express slots.
Clearly fans are the primary decision making point when it comes to choosing between two computers!
Given that the majority of PowerMac buyers will be running with less than 4GB of RAM and non-ECC memory, I’d say yes, the noise factor of the system is a more important concern. To me, who has only 2.5GB in the machine and has no use for ECC, the RAM factor is completely unimportant. Furthermore, I’d point out that the 4GB limit is simply the result of the motherboard I chose to use. If I had needed more than 4GB, I would’ve used a different motherboard. Since the X2 memory controller supports 2GB DIMMs, this wouldn’t have changed the cost (or the conclusion) significantly.
Sure. But try this link anyway. Samsung does make DIMMs, and one of them is in the G5. Apple generally uses Micron, Samsung, or Hynix branded DIMMs. A friend's PowerBook did come with Nanya DIMMs. All name brands, not generic.
I acquiesce that Samsung makes DIMMs. I still disagree with your usage of "name brand". Generally, "name brand" is considered something like Crucial, Corsair, Mushkin, etc. RAM from chip manufacturers like Micron usually hangs at the bottom of the Pricewatch lists, and I've never heard anybody refer to it as "name brand".
You can guarantee this without having heard either.
When’d you sneak into my apartment?
Your point?
Does the fact that he changed his argument later have a bearing on his initial reaction? I’d argue that he clearly thought the machine was loud, then got used to it over time.
Your line of argument about the hard drive is really kind of silly. I have the two machines sitting inches apart. I can easily hear the WD seek, and I can barely hear the Seagate seek. You can try to convince me that I don’t hear what I hear very clearly, but you’re not going to get anywhere with that. You can say I’m lying, that I’m making up the problem to make the G5 look bad, but seriously: if I wanted to make the G5 look bad, couldn’t I have just fudged the benchmarks?
So unless you are listening to the drives in isolation you can't possibly conclude one is louder than the other.
34db is a quiet fan? In what alternate reality? The fans in my PC are rated at 23db running at 1000rpm. With these fans, the hard drive is the loudest thing in the system, and every bit of difference is noticeable. Again, the same thing is true for the Dell machines here. You can easily hear every nuance of the hard drive's sonic signature.
You are comparing the sound of the Seagate drive in the X2 chassis with the WD in the G5 chassis. By your admission you agree that the X2 chassis is better damped. Put the Seagate drive in the G5 chassis and have a listen. I bet you will find it equally loud.
Okay, ripped the covers off, took the Seagate off its isolation mounts. The difference at idle is not noticeable. The difference in seek is still there and still significant. I’d do a double-blind, but you wouldn’t trust the results anyway! Now that the covers are back on, the difference is as significant as ever. I suppose I should simply shift some of the blame from the hard drive to the G5’s case? I’m perfectly willing to do that.
There are industry standards specified for measuring acoustics. Seagate and WD, I am positive, follow these standards set by ISO, ANSI and ASTM.
If a number doesn't have a mark next to it saying the standard under which it was measured, it wasn't measured under any standard worth considering. Moreover, I'd point out that the standard you have listed has nothing to do with PC components, and still doesn't give distances! On top of all of that, you're expecting me to trust the numbers from a bunch of guys who think a 34db fan is quiet? That page is little more than feel-good marketing material.
You try and follow the arguments. If the specs are correct, the WD drive is the quieter drive.
The specs do not list distances. Why can you not get this through your thick skull? PC components are tested under a very wide variety of circumstances, at a very wide variety of distances. Thermaltake sells fans that are rated at 18db, which are easily 28db+. Manufacturers' specs mean nothing.
Not according to seagates site the drives share the same quiet electronics the platter density shouldn’t make that much difference.
What are "quiet electronics"? Most electronics are quiet. The noise is from the motor, platter, and head assembly. The difference isn't just platter density: the platter and head are completely different generations of product. Seagate doesn't manufacture different platter densities using the same technology; when there are differences, it's because certain drives use the previous-generation tech.
How loud is your typing?
Given that the majority of PowerMac buyers will be running with less than 4GB of RAM and non-ECC memory, I’d say yes, the noise factor of the system is a more important concern. To me, who has only 2.5GB in the machine and has no use for ECC, the RAM factor is completely unimportant.
I am going out on a limb here and guessing that Apple didn't just decide to take on the extra development costs and operational overhead of offering ECC support on PowerMacs without market demand.
PowerMacs are used by pros as well as enthusiasts. The pros seldom hang out in Mac forums, where you seem to gather your "important marketing" data.
My guess is the folks at Virginia Tech and all the design and film production studios asked Apple to put ECC in.
I think you are confusing the target market of the Powermac with that of the iMac G5.
Furthermore, I’d point out that the 4GB limit is simply the result of the motherboard I chose to use. If I had needed more than 4GB, I would’ve used a different motherboard. Since the X2 memory controller supports 2GB DIMMs, this wouldn’t have changed the cost (or the conclusion) significantly.
It doesn't matter what motherboard you chose. It is in your X2, which you compared to the G5, and it is clearly inferior in most regards.
It doesn't matter what motherboard you chose. It is in your X2, which you compared to the G5, and it is clearly inferior in most regards.
I said the same thing in the article!
With regards to the ECC bit. The X2 doesn't have ECC because I don't need ECC. We're not comparing two machines you can buy pre-built here. The X2 is simply an example of a machine that can easily be built at a relatively low price using off-the-shelf PC components. If you needed ECC, and wanted to make a serious comparison between the two platforms, you'd just replace the motherboard with a Tyan K8E. This would cost you $200, about $70 more than the motherboard in the X2. You'd also put in an Opteron 175 (the Opteron version of the X2's chip). This would cost you $500, about $140 less than the chip in the X2. If you want to have the ECC battle, go ahead. Just change these two components, and take $70 off the cost of the X2!
With regards to the ECC bit. The X2 doesn’t have ECC because I don’t need ECC. We’re not comparing two machines you can buy pre-built here. The X2 is simply an example of a machine that can easily be built at a relatively low price using off-the-shelf PC components. If you needed ECC, and wanted to make a serious comparison between the two platforms, you’d just replace the motherboard with a Tyan K8E.
That's exactly why I think your conclusion is flawed. You are trying to compare very different things.
This would cost you $200, about $70 more than the motherboard in the X2. You’d also put in an Opteron 175 (the Opteron version of the X2’s chip). This would cost you $500, about $140 less than chip in the X2. If you want to have the ECC battle, go ahead. Just change these two components, and take $70 off the cost of the X2!
The Opteron 175, according to Froogle (your source), costs anywhere from $508-$926. The X2 4400+, anywhere from $131-$418.
Sorry, your prices don't add up, even for a few months ago. The Opteron would still cost a few hundred more than the X2.
Let's assume the cost differential for the Tyan board and the Opteron is $270. That is quickly eating into that $700 price difference.
First, you made the same Froogle mistake I did when I priced out the 2500JS. The price ranges listed can be inaccurate; you have to open up the page and actually read the listed prices. For example, the $131 figure is for a single-core 2800+. The cheapest 4400+ is $511, from eWiz. The cheapest Opteron 175 is $508, from AxentMicro. However, the prices in my article are based on what I paid for the components several months ago. At the time, I paid NewEgg $640 for the privilege of getting an X2 before the summer was out.
I should also point out that you don’t need an Opteron. The X2 can handle ECC memory just fine.
The Tyan board, as I said, is about $200 (not the cheapest you can find it, either).
> Sure. But try this link anyway. Samsung does make
> DIMMs, and one of them is in the G5. Apple generally
> uses Micron, Samsung, or Hynix branded DIMMs. A
> friend's PowerBook did come with Nanya DIMMs. All
> name brands, not generic.
These aren’t premium brands. These are “generic” brands in as much as any random OEM will use them. These memories are given limited testing, have poor warrantied timings, don’t have heat spreaders, and have little headroom.
Crucial, for example, is Micron memory with a different brand name to set it apart as "quality." Keep in mind that there are only a few DDR/DDR2 manufacturers in the world. In picking Hynix, Samsung, or Micron branded memory you're picking from the ass-end of the spectrum.
Which doesn't necessarily mean that you'll have any problems with it, but it's also not the premium hardware of the myth typically propagated by overzealous Mac users.
Just so you don't keep saying Crucial with such disdain: I was not implying that Crucial was "premium RAM" (though it's certainly name-brand). Since the G5 uses fixed RAM latencies, I see little reason to spring for the low-latency stuff. The Athlon64 likes it, however (I think the memory in the X2 is Patriot CAS 2; I remember it was one of the few CAS 2 1GB sticks when I bought it).
These aren’t premium brands. These are “generic” brands in as much as any random OEM will use them. These memories are given limited testing, have poor warrantied timings, don’t have heat spreaders, and have little headroom.
Bullshit. Samsung is name-brand memory used by OEMs. Because it doesn't have heat spreaders and is overclocked doesn't make it generic.
Crucial for example is Micron memory with a different brand name to set it apart as “quality.”
The DIMMs I got from Crucial have two stickers: the original Micron sticker that goes on the OEM parts and a Crucial sticker on the side. They are the exact same memory.
Crucial is the consumer brand; they slap on a heatsink, overclock the DIMMs, and suddenly it becomes a name brand. Please!
Keep in mind that there are only a few DDR/DDR2 manufacturers in world. In picking Hynix, Samsung, or Micron branded memory you’re picking from the ass-end of the spectrum.
Unless you have been living under a rock, Samsung has become one of the leading manufacturers in the world. In consumer electronics they have quickly outpaced even Sony.
Which doesn’t necessarily mean that you’ll have any problems with it, but it’s also not premium hardware which is the myth typically propogated by overzealous Mac users.
Yeah, right, premium means a colored heat spreader. I have Mushkin memory in my Athlon XP box; sorry, but to me that was an unheard-of, no-name brand when I bought my PC.
The DIMMs gamers use aren't premium hardware; they are designed to be overclocked, hence the heat spreaders. No manufacturer, barring Alienware, would put those in a box. Micron and Samsung are what most OEMs go to for DIMMs. They are name-brand manufacturers.
Samsung is name-brand memory used by OEMs. Because it doesn't have heat spreaders and is overclocked doesn't make it generic.
That should have been: Samsung is name-brand memory used by OEMs. Because it doesn't have heat spreaders and isn't overclocked doesn't make it generic.
Are you being intentionally retarded? I really have little patience for this asinine stance you have. Samsung is not a premium brand. It’s not even a common brand, actually. They were a premium brand for RDRAM several years ago, but that’s about it. It doesn’t matter that Samsung is a “leading manufacturer,” neither their brand nor that model of memory is a premium product. Since you base your entire argument around the brand of the product, rather than the model, I am left with no option but to point out that Samsung is not a brand associated with premium DIMMs, regardless of their manufacturing.
Premium memory is tested, selected, and warrantied for high-performance and aggressive timings. That stick of Samsung memory you linked to is of no higher quality than something out of a Dell machine. Please live in your dream world if you want, but save me the attitude that accompanies your ignorance.
Are you being intentionally retarded? I really have little patience for this asinine stance you have. Samsung is not a premium brand.
The only person with an asinine stance is you. Define premium.
Since you base your entire argument around the brand of the product, rather than the model, I am left with no option but to point out that Samsung is not a brand associated with premium DIMMs, regardless of their manufacturing.
According to whom? You? What credentials do you have to specify which brand is premium? Major computer makers use Samsung DIMMs.
Premium memory is tested, selected, and warrantied for high-performance and aggressive timings. That stick of Samsung memory you linked to is of no higher quality than something out of a Dell machine. Please live in your dream world if you want, but save me the attitude that accompanies your ignorance.
Really, Samsung DIMMs are not tested? Apple goes through no QA? Only Corsair and Mushkin do those things. Right.
The brands you claim to be premium are overclocked DIMMs with heat spreaders. The fact that they are tested with aggressive timings and their heat dissipation is controlled for those aggressive timings doesn't make them premium. It makes them overclocked for overclock-frenzied gaming junkies.
Those DIMMs use the same memory chips that go into the OEM DIMMs, you know, the chips that need to respond more aggressively. Samsung tests their DIMMs no less than the others. In fact, they are the ones that put the tolerance in the chips that makes it possible for Corsair to overclock them.
Do you understand how fabrication and testing works? Despite me specifically telling you things repeatedly, you choose to repeat them as if they are insightful. There are very few manufacturers of DRAM. That, however, doesn't mean that the bins the CAS4 RAM you own came from underwent the same degree of testing as the bins the premium brands use for their warrantied performance memory. If you think they did, then the answer to my question is "no." That's the value-add. That's why people who buy premium memory don't typically buy Samsung and do buy Mushkin, OCZ, etc.
As for Apple having QA, what does that matter? Do you think they test bins of memory that they obtain from multiple sources to ensure they’re of the same caliber as an actual premium RAM supplier?
http://www.macsimumnews.com/index.php/archive/apple_dell_hp_deal_wi…
Wow, that Apple QA really saved the day there, didn't it? Or do you really want me to enumerate the troubles that have befallen them over the years?
Do you think the RAM used by Apple is of greater or lower quality than the RAM used by PC OEMs?
And it's not my fault you don't know anything about "premium RAM brands." Maybe you should bother to see what people who buy top-end hardware actually buy, what people who perform benchmarking actually use, and ask yourself why "overclock frenzied gaming junkies" purchase it.
No, of course not. You having no idea what you’re talking about, now that’s evidence.
(Fixed random obvious grammar mistakes that got past my flu-ish brain)
Do you understand how fabrication and testing works? Despite me specifically telling you things repeatedly, you choose to repeat them as if they are insightful. There are very few manufacturers of DRAM. That, however, doesn't mean that the bins the CAS4 RAM you own came from underwent the same degree of testing as the bins the premium brands use for their warrantied memory. If you think they did, then the answer to my question is "no." That's the value-add. That's why people who buy premium memory don't buy Samsung and do buy Mushkin, OCZ, etc.
Why does it matter what speed bins the Samsung chips in my G5 came out of? They are never going to be overclocked. They are tested and work at CL 4-4-4-12, and that is what matters. I don't care that Corsair and OCZ pick higher-speed-bin Samsung parts to overclock. If I put a Corsair CL 3-3-3-8 in my G5, it will still only work at 4-4-4-12. At that point it is just a Samsung DIMM with a fancy heat spreader. There is nothing premium about that. Those low-latency DIMMs are for gamers that want an extra fps. It is akin to one guy buying an M5 and another getting a Dinan kit on the M5. The Dinan M5 doesn't become a premium brand and BMW the generic. It means that the Dinan is tweaked for performance and sound.
Get it?
Do you think the RAM used by Apple is of greater or lower quality than the RAM used by PC OEMs?
No, I don't think OEM choices are bad. OEM choices are tested and designed to work with the product I purchased, and that is good enough. I don't care for ego massaging by putting a so-called enthusiast module in my box for no added benefit.
And it’s not my fault you don’t know anything about “premium RAM brands.” Maybe you should bother to see what people that buy top-end hardware actually buy, people that perform benchmarking actually use, and ask yourself why “overclock frenzied gaming junkies” purchase it.
Sorry, I don't consider them premium; I do consider them gaming or enthusiast brands. Top-end hardware to me is stuff that does ECC, guarantees data integrity, and offers balanced performance. Not "ooh, I got 2 extra fps playing Doom 3."
No, of course not. You having no idea what you’re talking about, now that’s evidence.
Oh dear…
That your G5 can’t make use of higher-performance memory doesn’t make it any less value-added.
Out of curiosity, do you know what ECC does? And do you think that all ECC memory is of the same quality? Premium ECC memory is an entirely different market segment, much as a luxury sedan is an entirely different market than a limousine.
And before I forget, I defined "generic" as "in as much as any random OEM will use them," which you miraculously managed to repeat in a reply to my comment, and you don't even disagree that these don't afford any of the value-added performance guarantees, nor that your stick of memory is no better than a random PC OEM's. Now all you have to do is stop pretending that memory certified to work in various configurations or with certain timings differs solely in the presence of a heat spreader, and you're all the way there. If for some reason you think I bothered to put "generic" in quotes in order to suggest that "micron" and "samsung" are not names, then I'm impressed.
That your G5 can’t make use of higher-performance memory doesn’t make it any less value-added.
What is the value-add? The fancy heat spreader and the Corsair name? What exactly is the value that the enthusiast DIMM is providing for the added dollars?
Out of curiosity do you know what ECC does? And do you think that all ECC memory of the same quality? Premium ECC memory is an entirely different market segment. Much as a luxury sedan is an entirely different market than a limousine.
Yes, I do. What is a premium ECC memory brand? Corsair doesn't even make ECC memory; they only have buffered DDR1 DIMMs. I am curious what "premium ECC memory" is.
And before I forget, I defined “generic” as “in as much as any random OEM will use them” which you miraculously managed to repeat in a reply to my comment, and you don’t even disagree that these don’t afford any of the value-added performance guarantees, nor that stick of memory is superior to the random PC OEM.
Your definition of generic is flawed at best. Let's see: Hartge mods premium cars for appearance. That is their value-add. There is a Hartge BMW 6 Series on the market, a value-added BMW, carried by a few BMW dealers. Does that mean BMW is now a generic brand of cars because every BMW dealer stocks the factory trim version and only a few carry the Hartge one? Or extend that example to Dinan for performance.
The BMW is still a premium car brand even if value-added brands that customize BMW exist. Get it.
Now all you have to do is stop pretending that the value added to memory certified to work in various configurations or with certain timings consists solely of a heat spreader, and you’re all the way there.
Please tell me what the technical difference is between a Corsair DIMM and a Samsung DIMM with the same memory chips. Why would you need a heat spreader on slower-clocked DIMMs?
I wonder why Corsair’s ValueSelect line has no timing information or heat spreaders. Is the ValueSelect line from Corsair still considered premium memory according to you?
If for some reason you think I bothered to put “generic” in quotes in order to suggest that “micron” and “samsung” are not names, then I’m impressed.
What? You lost me.
BTW, you were claiming Crucial was a premium brand, till I told you that Crucial sells the same OEM Micron sticks and adds a Crucial label. Wonder why?
Edited 2005-11-16 19:00
> What is the value add? The fancy heat spreader and the
> Corsair name?
Tested memory that is certified for aggressive timings, bandwidth, and dual-channel configurations, and the quality of the circuitry of the modules. These are the highest-quality and most resilient batches of memory, supplied with guarantees of performance. And this isn’t just a matter of Corsair; it’s a matter of premium memory. Which doesn’t exist for you, outside of the uniform-quality ECC DIMMs that you don’t mind replacing just as long as they detect they’re pieces of crap before they go.
> Yes, I do. What is a premium ECC memory brand? Corsair
> doesn’t even make ECC memory; they only have buffered
> DDR1 DIMMs. I am curious what “premium ECC memory” is.
OCZ does provide ECC memory, so does Mushkin.
If you understand how ECC memory works, you know that ECC memory quality is as variable as any other DRAM, and that the error correction is only useful for correcting one-bit errors; it does nothing to ensure the quality of the memory. Premium ECC memory will be from the better bins, have quality circuitry, and have all of the niceties associated with selectiveness.
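To make the single-bit-correction point concrete: below is a toy Hamming(7,4) code, the textbook single-error-correcting scheme. Real ECC DIMMs use a wider SECDED code (64 data bits plus 8 check bits), but the mechanism is the same. This sketch is the editor's illustration, not anything from the thread:

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits.
# Any single flipped bit can be located and corrected; this is the
# "one-bit errors" limitation discussed above.

def encode(nibble):
    """Encode data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(word):
    """Recompute parities; the syndrome is the 1-based error position."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 means no error detected
    if syndrome:
        w[syndrome - 1] ^= 1           # flip the offending bit back
    return [w[2], w[4], w[5], w[6]]    # recovered data bits

data = [1, 0, 1, 1]
received = encode(data)
received[4] ^= 1                       # simulate a single-bit memory error
assert correct(received) == data       # the error is found and fixed
```

Two flipped bits, by contrast, produce a syndrome pointing at the wrong position, which is why server-grade schemes add extra check bits (SECDED) or spread a chip’s worth of bits across the word (chipkill).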
> Your definition of generic is flawed at best.
My definition of generic applied directly to what I said. Though your continued allusion to BMWs (now I’m certain you’re an Apple zealot, btw) is certainly comparable to the DRAM industry.
> Please tell me what the technical difference is
> between a Corsair DIMM and a Samsung DIMM with the
> same memory chips. Why would you need a heat spreader
> on slower clocked DIMMS?
What do you mean, the same memory chips? Do you mean identical memory chips put through no different sorting, with no different circuitry? What would be the difference in that? Hey, what’s the difference between a 3.6GHz and a 3.4GHz Xeon? Did you ever get around to that whole GPU discussion?
> I wonder why Corsair’s Valueselect line has no timing
> information or heatspreaders on them. Is the value
> select line from Corsair still considered premium
> memory according to you?
No. Why would it be? Don’t tell me that, despite my telling you hours ago that these brands have value models, you’ve only just now discovered them. To think, you only need to Google everything someone says to construct a half-assed response.
> BTW you were claiming crucial was a premium brand,
> till I told you that crucial sells the same OEM
> micron sticks and adds a crucial label. Wonder why?
And I quote me:
“Samsung is a company, not a model of memory. They provide a large variety of modules, but they are not a premium performance brand. Mushkin, OCZ, and Corsair are premium brands (though they have ‘value’ models). Crucial isn’t a premium brand either, so it’s not a big improvement.”
“Crucial for example is Micron memory with a different brand name to set it apart as “quality.” Keep in mind that there are only a few DDR/DDR2 manufacturers in the world. In picking Hynix, Samsung, or Micron branded memory you’re picking from the ass-end of the spectrum.”
Hey, maybe you can repeat me some more and lie about what I’ve said. Quality, that.
You can pick your designation from (1) illiterate (2) stupid (3) troll that’s wasted my time.
Edited 2005-11-16 19:18
Tested memory that is certified for aggressive timings, bandwidth, and dual-channel configurations, and the quality of the circuitry of the modules. These are the highest-quality and most resilient batches of memory, supplied with guarantees of performance.
All this means what exactly at 4-4-4-12 timings? Absolutely nothing!! So, no value add. On Rayiner’s G5, CL 3-3-3-8 memory isn’t any better or more reliable. You can pretend all you want, but the memory that comes with the G5 is good, not some cheap generic no-name brand as you and Rayiner seem to suggest.
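For reference, the timing figures being argued over convert to wall-clock delay as cycles divided by the memory clock (DDR transfers twice per clock, so DDR-400 runs a 200MHz clock and DDR2-533 a ~266MHz clock). A quick sketch of that arithmetic; the helper function is the editor's, not from the article:

```python
# First-word CAS delay in nanoseconds, from CL cycles and DDR rate.

def cas_ns(cl_cycles, ddr_rate_mhz):
    clock_mhz = ddr_rate_mhz / 2.0     # DDR: two transfers per clock
    return cl_cycles / clock_mhz * 1000.0

print(cas_ns(2, 400))   # CL2 DDR-400 (the X2): 10.0 ns
print(cas_ns(4, 533))   # CL4 DDR2-533 (the G5): ~15.0 ns
```

On those numbers, CL4 DDR2-533 and plain CL3 DDR-400 land at roughly the same ~15ns first-word delay, which is essentially the point being made: at the timings the G5 runs, low-latency enthusiast modules buy nothing.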
OCZ does provide ECC memory, so does Mushkin.
They have no DDR2 533MHz ECC memory in their catalog. Hmmm… the G5 needs one of those. Let me see: Corsair, Mushkin, and OCZ have none. Premium indeed. They are hobbyist gaming-enthusiast brands. Not serious companies making serious products for the entire computing industry.
If you think top-end computing is an X2 or Pentium Dxxx with OCZ, Corsair, or Mushkin DIMMs, I have bad news for you about Santa Claus and the tooth fairy.
If you understand how ECC memory works, you know that ECC memory quality is as variable as any other DRAM, and that the error correction is only useful for correcting one-bit errors; it does nothing to ensure the quality of the memory. Premium ECC memory will be from the better bins, have quality circuitry, and have all of the niceties associated with selectiveness.
Really? You can do multi-bit correction with ECC and chipkill. Sun’s recently announced UltraSPARC T1’s memory controller supports quad error correct and octal error detect ECC and chipkill.
According to me that is top-end computing. Not Corsair, OCZ and mushkin in a gaming PC.
No. Why would it be? Don’t tell me that, despite my telling you hours ago that these brands have value models, you’ve only just now discovered them. To think, you only need to Google everything someone says to construct a half-assed response.
No, I knew they had a value brand. I bought Mushkin DIMMs before, remember? I was looking at Corsair the other day. But is their value brand any better than the Samsung DIMM? No Google query will answer that; it’s an opinion I thought you would provide. Now answer the question.
“Crucial for example is Micron memory with a different brand name to set it apart as “quality.” Keep in mind that there are only a few DDR/DDR2 manufacturers in the world. In picking Hynix, Samsung, or Micron branded memory you’re picking from the ass-end of the spectrum.”
Hey, maybe you can repeat me some more and lie about what I’ve said. Quality, that.
Scroll down in that link and take a look at the picture of the Crucial DIMM. That is exactly like the DIMM I got from Crucial for my G5.
http://www.short-media.com/printcontent.php?print=r&id=290
What value or “quality” is that extra Crucial sticker adding to the Micron DIMM? Nothing. Only naive idiots like you think a subsidiary or spin-off means quality somehow.
You can pick your designation from (1) illiterate (2) stupid (3) troll that’s wasted my time.
Or smarter and less gullible than you, and you don’t want to admit it.
Edited 2005-11-16 19:58
You can pretend all you want, but the memory that comes with the G5 is good, not some cheap generic no-name brand as you and Rayiner seem to suggest.
The only part in the Mac that I suggested was cheap was the Western Digital. I did so because when I bought the X2’s drives, the 2500JD was running in a completely different price bracket from the Maxtor and Seagate 250GB drives I was considering. This is because it was previous-generation technology (older platter designs, etc). For the rest of the parts, I simply suggested that they were nothing you wouldn’t find in a Dell. You’ve done nothing but prove that. Dell doesn’t use no-name parts in their computers. They use cheap, often generic, but decent parts. My last Dell tower came with Micron RAM, for example.
The whole point of the exercise was to debunk the idea that Apple uses particularly high-quality parts in their machines. The underlying implication, of course, is also that these parts cost more, justifying the price premium of Apple computers. All you’ve done is debunk these claims with your insistence that quality fans don’t matter, or quality RAM doesn’t matter. Maybe they don’t. That doesn’t change the fact that Apple machines use the same parts Dell machines do.
The only part in the Mac that I suggested was cheap was the Western Digital. I did so because when I bought the X2’s drives, the 2500JD was running in a completely different price bracket from the Maxtor and Seagate 250GB drives I was considering.
Is the WD2500JD in the G5, or is it the WD2500JS? That’s a different drive (SATA-II), and the price checks prove you wrong.
Dell doesn’t use no-name parts in their computers. They use cheap, often generic, but decent parts. My last Dell tower came with Micron RAM, for example.
Did I claim that Dell used cheaper parts than Apple? There is nothing wrong with Micron DIMMs.
The whole point of the exercise was to debunk the idea that Apple uses particularly high-quality parts in their machines.
They do use high-quality parts. They just don’t meet your silent-PC standards, which are personal to you and only you.
The underlying implication, of course, is also that these parts cost more, justifying the price premium of Apple computers. All you’ve done is debunk these claims with your insistence that quality fans don’t matter, or quality RAM doesn’t matter. Maybe they don’t. That doesn’t change the fact that Apple machines use the same parts Dell machines do.
They are more expensive. We just did the exercise of adding ECC memory to your X2, and it increased the price.
The WD drive is more expensive than the Seagate 7200.8 in your machine, and is rated to be quieter and supports AAM.
I didn’t say quality RAM doesn’t matter. I took offense at your claim that the RAM that came with the G5 is “generic” while the one in your X2 is name-branded. The G5 uses name-branded DIMMs (Micron, Samsung). And yet you claimed to have installed Crucial memory in the G5 alongside the supposedly no-name stuff.
It is even funnier that you made no mention of the brand of memory in the X2; all the article says is CAS2 DDR-400.
X2:
Processor: AMD Athlon X2 4400+ @ 2.2 GHz
Motherboard: DFI LanParty nForce4
Cache Memory: 1MB of full-speed on-die L2 per core
System Memory: 2GB of CAS2 DDR-400 SDRAM
Graphics Card: Gigabyte GeForce 6600GT 128MB
Hard Drive: Seagate 7200.8 250GB SATA
Optical Drive: MSI SATA CD-RW
Power Supply: Seasonic S12 380W
Case: Antec P180
Operating System: Ubuntu AMD64 5.10
> All this means what exactly at 4-4-4-12 timings?
> Absolutely nothing!!
The constraints of the processor don’t change the status of the brand, which is the whole point of this. The RAM shipped in the PowerMac isn’t ‘premium’ memory, it’s just common OEM memory. Do you think I’m suggesting Apple pay $50+ more for memory? No. The cost doesn’t match the benefit.
> On Rayiner’s G5, CL 3-3-3-8 memory isn’t any better
> or more reliable.
Well, reliability is another matter. It isn’t any faster when used there. ECC memory isn’t useful at all in his X2 configuration.
> G5 is good, not some cheap generic no-name brand as
> you and Rayiner seem to suggest.
It’s cheap, mediocre memory. It isn’t nameless as Rayiner suggested, but I don’t really care about what Rayiner suggested. That’s precisely why I quoted “generic” and provided an appropriate meaning.
> They have no DDR2 533MHz ECC memory in their catalog.
I didn’t say that they did. I said that they had ECC memory, though they’re essentially in a different market segment.
> Not serious companies making serious products for the
> entire computing industry.
Your good friend BMW makes a lot of budget cars. The entire computing industry isn’t desktop computers. In fact the computing industry at large doesn’t consist of ‘premium’ products. No aggregate does. That’s not premium, that’s average. What does your watch need with 800MHz DDR2?
> If you think top-end computing is an X2 or Pentium
> Dxxx with OCZ, Corsair, or Mushkin DIMMs, I have bad
> news for you about Santa Claus and the tooth fairy.
HPC has nothing to do with this discussion. That’s not even remotely the same market. Buying high-performance desktop memory for anything that requires reliability is retarded. Though the processor concern is a little amusing. Multicore Xeons and Opterons will fill far more slots on the Top500 than the PowerMac will.
> Really? You can do multi-bit correction with ECC and
> chipkill. Sun’s recently announced UltraSPARC T1’s
> memory controller supports quad error correct and
> octal error detect ECC and chipkill.
If you want to pay in latency, cost, and space you can implement all sorts of codes. I’m not sure what that matters to the discussion of the G5 and the Athlon64, since they don’t.
> No, I knew they had a value brand. I bought Mushkin
> DIMMs before, remember?
I think I’ve been clear enough about what I’ve stated: without better sorting or better circuitry, you don’t get superior products. I think I’ve been perfectly clear from the onset that: 1) you shouldn’t compare brands, you should compare models; 2) if you’re going to compare brands, then the brands considered “premium” are not Samsung, Micron, et al.; 3) the reasons they’re considered premium were previously outlined.
> What value or “quality” is that extra Crucial sticker
> adding to the Micron DIMM? Nothing. Only naive idiots
> like you think a subsidiary or spin-off means quality
> somehow.
None. Crucial isn’t a premium brand. The “quality” is solely in Micron shedding its name (as a good supplier in your opinion) for sales. I specifically stated that it wasn’t a premium brand. I really hate repeating myself, and I don’t like you repeating what I say as if it’s original.
> Or smarter and less gullible than you, and you don’t
> want to admit it.
I have a total of 4GB of Kingmax memory (I bet you can’t guess who provides the DRAM for those spectacular modules) and probably another 8GB in assorted brands. I don’t buy premium memory because it’s not important to me. If you think you’re smarter than me, more power to you, but given your ignorance, your inability to focus, your reliance on Google queries to make silly comments that don’t make any sense in this discussion, and your lack of knowledge of this subject, I don’t know what you’re basing that on. It must be your Sennheisers.
Edited 2005-11-16 20:55
I haven’t slept in about thirty hours and I have the flu. I haven’t even clicked on any of your links for hours, and I’ve probably been an idiot and an asshole; interaction in meatspace has made it fairly obvious that I’m on a roll. And despite my typical insistence that people not post crap here, browsing my own posts I see they probably don’t meet my own standards of excellence.
You win. I’m an idiot. I probably haven’t been anywhere near as coherent as I thought. I don’t really add much value here, and I’m going to leave. I’ve probably posted a day’s worth of complete rambling nonsense. Sad, really.
I haven’t slept in about thirty hours and I have the flu.
I hope you feel better soon. You really should get some rest.
We can always continue this later. Or not.
The constraints of the processor don’t change the status of the brand, which is the whole point of this. The RAM shipped in the PowerMac isn’t ‘premium’ memory, it’s just common OEM memory. Do you think I’m suggesting Apple pay $50+ more for memory? No. The cost doesn’t match the benefit.
I just don’t get where you are explaining the value add of the premium memory here.
Well, reliability is another matter. It isn’t any faster when used there.
Exactly so what good is the premium memory in the G5?
Your good friend BMW makes a lot of budget cars.
Really? Name a budget BMW-branded model.
The entire computing industry isn’t desktop computers.
Now you are talking sense. I can guarantee that nobody but hobbyist builders, gaming enthusiasts, and overclockers use OCZ, Corsair, or Mushkin DIMMs; the majority of the computing industry doesn’t touch them.
In fact the computing industry at large doesn’t consist of ‘premium’ products. No aggregate does. That’s not premium, that’s average. What does your watch need with 800MHz DDR2?
Finally, some more sense. You got it: Apple is just another manufacturer in the computer industry and doesn’t target the same people OCZ, Mushkin, Corsair, Alienware, and Falcon Northwest do. That is a very small segment of the entire industry.
Samsung, Micron, and Infineon make products for the rest of the industry. Products that go into servers that need 99% uptime. And hence they end up in a PowerMac at CL 4-4-4-12, for reliability first and foremost.
HPC has nothing to do with this discussion. That’s not even remotely the same market.
HPC is not the only other market. Oh dear… this is going to take a while. Have you heard of reliable systems for multibillion-dollar businesses? HPC is one market out of many. There is a market for mainframes.
Look at BlueGene/L: it uses 700MHz PPC cores. It doesn’t use fast whiz-bang enthusiast parts for performance.
Buying high-performance desktop memory for anything that requires reliability is retarded.
I am glad you agree. The PowerMac is designed for reliability, hence the ability to use ECC memory. So I guess they sacrificed a little performance for it. But their customers (the people who buy tens and hundreds of PowerMacs for business) sure will appreciate the thought put into reliability.
You as an individual probably won’t.
Though the processor concern is a little amusing. Multicore Xeons and Opterons will fill far more slots on the Top500 than the PowerMac will.
I am sure IBM OpenPower boxes will show up; they use the same CPU. In fact, BlueGene/L uses a PPC CPU. Guess who makes the G5, aka the PPC 970MP.
If you want to pay in latency, cost, and space you can implement all sorts of codes. I’m not sure what that matters to the discussion of the G5 and the Athlon64, since they don’t.
It does. Because Apple clearly chose reliability over low latency for the G5. Rayiner chose performance over reliability. Apple makes computers for millions of customers; Rayiner made one for himself. The decision-making processes are significantly different.
I think I’ve been clear enough about what I’ve stated: without better sorting or better circuitry, you don’t get superior products. I think I’ve been perfectly clear from the onset that: 1) you shouldn’t compare brands, you should compare models; 2) if you’re going to compare brands, then the brands considered “premium” are not Samsung, Micron, et al.; 3) the reasons they’re considered premium were previously outlined.
So by your rationale, even though Corsair is a premium brand, not all their products are premium.
OK, what is the brand in Rayiner’s X2? It isn’t mentioned in the article. For all you know it is a no-name generic brand.
I just don’t get where you are explaining the value add of the premium memory here.
The value added isn’t the point here. Mac folks like to point out that Mac hardware is of very high quality. They say it uses premium components, with the implication that it justifies their premium prices. We all seem to agree that there are no (save perhaps for the motherboard) premium components in the PowerMac. The hard drive is commodity, the fans are commodity, the RAM is, by your own admission, not premium, etc. So what’s the argument?
Now you are talking sense. I can guarantee that nobody but hobbyist builders, gaming enthusiasts, and overclockers use OCZ, Corsair, or Mushkin DIMMs; the majority of the computing industry doesn’t touch them.
Boutique computer places will often use premium memory. People are always comparing Apple to boutique computer places, with the implication of similar attention to using quality parts.
Samsung, Micron, and Infineon make products for the rest of the industry. Products that go into servers that need 99% uptime. And hence they end up in a PowerMac at CL 4-4-4-12, for reliability first and foremost.
The reliability argument is not a very good one. Premium components cost more for a reason — the chips are binned to find the ones that can handle tight tolerances. The chips are placed on low-noise PCBs with lots of layers that allow for better wire routing. Samsung doesn’t use their best chips in their generic RAM. They sell their best chips to companies like OCZ and Mushkin. If you don’t overclock them, they are still higher-quality chips that will likely be more reliable. They are still built on higher-quality PCBs, which will make them more reliable.
I am glad you agree. The PowerMac is designed for reliability, hence the ability to use ECC memory. So I guess they sacrificed a little performance for it. But their customers (the people who buy tens and hundreds of PowerMacs for business) sure will appreciate the thought put into reliability.
I’m curious. Does your PowerMac have ECC memory? No? Then why are you going on and on about it? The previous-generation PowerMacs couldn’t use ECC memory. Does that mean they weren’t professional machines?
It does. Because Apple clearly chose reliability over low latency for the G5.
There is not a trade-off here between reliability and low-latency. Premium memory can run at low latency because it uses better chips and better PCBs. These help reliability, not hurt it.
Rayiner chose performance over reliability.
Yes, I drone on and on about 10% performance differences being unnoticeable, but I buy low-latency memory for the extra 2% performance boost? Please, give me a little credit. I bought premium memory precisely for the reliability aspect. They use better chips, better PCBs, and have better warranties.
The value added isn’t the point here. Mac folks like to point out that Mac hardware is of very high quality. They say it uses premium components, with the implication that it justifies their premium prices. We all seem to agree that there are no (save perhaps for the motherboard) premium components in the PowerMac. The hard drive is commodity, the fans are commodity, the RAM is, by your own admission, not premium, etc. So what’s the argument?
Please don’t post out of context. The value-add discussion was aimed at the claim that using Corsair’s high-performance DIMMs in a G5 would add value. It clearly wouldn’t.
Apple charges a premium because they need to make a profit. They do their own industrial design, probably pay IBM a boatload for PPC chips, and have development costs for Mac OS X, iLife, Final Cut Studio, and Aperture, costs which Dell, Lenovo, Gateway, and other manufacturers don’t have. Apple also doesn’t have the volumes that Dell does to subsidize price.
Your X2 is a piss-poor example of a quality machine from a manufacturer, because you built it yourself. For Apple, a dollar here and there adds up to millions in volume for parts and inventory. How about you build 50 of those X2s and sell them, and see what components you pick and what price you can sell them for? I can guarantee you wouldn’t sell them for what you paid for the parts.
The problem is you are comparing your hobby with a business. The rules of engagement are different.
Boutique computer places will often use premium memory. People are always comparing Apple to boutique computer places, with the implication of similar attention to using quality parts.
Apple is a boutique computer place because their industrial designs are second to none and they make functional industrial art. People covet them and imitate them. A Lamborghini Gallardo uses Koni FSD shocks and Brembo brakes; does that mean it isn’t worth the price premium because you can find those same components on the market and put them in your car? A Lancer Evo FQ matches or outruns a Ferrari F430 from 0-60 and has aftermarket parts that could arguably be considered premium race grade. Does that make the Ferrari any less special?
The reliability argument is not a very good one. Premium components cost more for a reason — the chips are binned to find the ones that can handle tight tolerances.
The premium components are only binned to run at lower latencies. What good is that in a G5?
Samsung doesn’t use their best chips in their generic RAM. They sell their best chips to companies like OCZ and Mushkin. If you don’t overclock them, they are still higher-quality chips that will likely be more reliable. They are still built on higher-quality PCBs, which will make them more reliable.
Really? So a top-speed-bin P4, one able to clock at 3.8GHz but running at 3.4GHz, is a higher-quality chip than a P4 of the exact same model binned to run at 3.4GHz and running at 3.4GHz?!
I’m curious. Does your PowerMac have ECC memory? No? Then why are you going on and on about it? The previous-generation PowerMacs couldn’t use ECC memory. Does that mean they weren’t professional machines?
They were professional machines, but probably lost sales in segments that required ECC. Apple probably wants to target those segments, or got a lot of feedback from existing customers requesting the feature, or both.
There is not a trade-off here between reliability and low-latency. Premium memory can run at low latency because it uses better chips and better PCBs. These help reliability, not hurt it.
Sure, premium memory is found in all the high-end boxes and servers, right? I wonder why they use registered ECC memory for those applications and not Corsair, OCZ, or Mushkin low-latency, reliable, higher-quality DRAM chips.
Apple has to support both types of memory. I am not aware of a memory controller that can handle both low-latency DDR2 and ECC DIMMs equally well.
I bought premium memory precisely for the reliability aspect. They use better chips, better PCBs, and have better warranties.
What brand did you buy, for crying out loud? I have asked you many times already. You didn’t even mention it in the article. What is this super-reliable, fast memory that you keep harping on about?
Edited 2005-11-16 22:39
Please don’t post out of context. The value-add discussion was aimed at the claim that using Corsair’s high-performance DIMMs in a G5 would add value. It clearly wouldn’t.
They wouldn’t add value in terms of performance. They would add value in terms of component quality.
Apple charges a premium because they need to make a profit.
I don’t disagree with this point. However, I can’t count the number of times I’ve heard “Apples are more expensive than PCs because they use better parts.” I’m sure you’ve seen that in these forums. I explicitly pointed out that this was the myth I was trying to debunk. The idea that I was trying to say the G5 is a low-quality machine is entirely your fabrication.
Your X2 is a piss-poor example of a quality machine from a manufacturer, because you built it yourself.
Did I really say it was? I said the PowerMac’s parts were lower in quality than my X2’s. Nothing more, nothing less. Oh, and you could build a PC using premium parts; boutique PC shops do this all the time. The simple truth is that the quality of these boutique PCs is apparently higher than the quality of the PowerMac. That’s all I intended to point out.
A Lamborghini Gallardo uses Koni FSD shocks and Brembo brakes; does that mean it isn’t worth the price premium because you can find those same components on the market and put them in your car?
A Lamborghini also uses very expensive Brembo brakes that cost as much as a small Kia. If the Lamborghini used the same brakes as a Ford Taurus, then we’d have a problem. And that’s exactly the case here. The PowerMac has the reputation of a Lamborghini, but uses mostly the same parts as a Ford Taurus-like Dell.
The premium components are only binned to run at lower latencies. What good is that in a G5?
All chips manufactured on the same design are not equal, due to manufacturing variability. The chips that can run at tight tolerances are higher-grade chips. Even if you run them at regular tolerances, they are still higher-grade chips that are less likely to fail.
Really? So a top-speed-bin P4, one able to clock at 3.8GHz but running at 3.4GHz, is a higher-quality chip than a P4 of the exact same model binned to run at 3.4GHz and running at 3.4GHz?!
Yes! This is, for example, why Burr Brown sells “K” grade DAC chips for a premium. Some chips on a manufacturing run just turn out better than others.
Apple has to support both types of memory. I am not aware of a memory controller that can handle both low-latency DDR2 and ECC DIMMs equally well.
The X2’s memory controller handles low-latency DDR and ECC DIMMs equally well. I see no reason why the case would be different for DDR2.
What brand did you buy, for crying out loud? I have asked you many times already. You didn’t even mention it in the article.
Patriot. I said it yesterday in a reply to Japail.
They wouldn’t add value in terms of performance. They would add value in terms of component quality.
If you say so. As long as the memory works and keeps working, where does the quality show? Do the colors on your display become more vivid? Do the MP3s sound cleaner? Where?
A Lamborghini also uses very expensive Brembo brakes that cost as much as a small Kia. If the Lamborghini used the same brakes as a Ford Taurus, then we’d have a problem. And that’s exactly the case here. The PowerMac has the reputation of a Lamborghini, but uses mostly the same parts as a Ford Taurus-like Dell.
A Lancer Evo uses the same brakes, I think. The Koni FSDs can be put in a Jetta or a 3 Series Bimmer. That would be analogous to your example of the hand-built X2.
All chips manufactured on the same design are not equal, due to manufacturing variability. The chips that can run at tight tolerances are higher-grade chips. Even if you run them at regular tolerances, they are still higher-grade chips that are less likely to fail.
I don’t see the point about being less likely to fail. By that logic, all the slower-bin parts Intel sells, rated at lower clock rates, are inferior parts. Intel is cheating customers, then, because those parts are more likely to fail than their high-bin counterparts at the rated tolerances?
You are now speaking out of your nether regions.
Yes! This is, for example, why Burr Brown sells “K” grade DAC chips for a premium. Some chips on a manufacturing run just turn out better than others.
Let’s not get into audio. MIT sells $30,000 speaker cables, and that is a whole other rathole of a discussion. I am an audiophile and have spent my fair share on cabling.
The X2’s memory controller handles low-latency DDR and ECC DIMMs equally well. I see no reason why the case would be different for DDR2.
The X2’s memory controller handles ECC DIMMs… news to me.
Patriot. I said it yesterday in a reply to Japail.
Sorry, missed it. Why didn’t you pick Patriot for the G5 upgrade, and why Crucial?
If you say so. As long as the memory works and keeps working, where does the quality show?
It’s peace of mind. If you know the memory is built better, you know that it’s likely to last longer. You know it’s not just the same generic stuff you find in every machine.
I don’t see the point about being less likely to fail. By that logic, all the slower-bin parts Intel sells, rated at lower clock rates, are inferior parts.
If you don’t see why a part running at the margin of its performance and with a lower quality PCB is more likely to fail, then I don’t see what I can do to explain things any better. Let me give another example though. Why will most people tell you to stay away from ECS motherboards? Functionally, they use the same northbridge as an Asus, same southbridge, etc. Yet, people pay a lot more for an Asus board. Why? Because the latter are just plain better built. They’re built on better PCBs with better wire routing. They use better capacitors and better resistors. These things make them more reliable in the long run.
The X2’s memory controller handles ECC DIMMs… news to me.
All Athlon64s and Opterons can run ECC memory. An Opteron 175 and an Athlon X2 are identical chips, they are just packaged differently.
Sorry, missed it. Why didn’t you pick Patriot for the G5 upgrade, and why Crucial?
I was worried about compatibility. I saw that all the folks on macnn.com were using Crucial in their machines, so I went with the herd.
It’s peace of mind. If you know the memory is built better, you know that it’s likely to last longer. You know it’s not just the same generic stuff you find in every machine.
Sure, for peace of mind, I can understand. But perceived quality and actual quality are never the same. I buy things for peace of mind that probably don’t make sense to many people.
If you don’t see why a part running at the margin of its performance and with a lower quality PCB is more likely to fail, then I don’t see what I can do to explain things any better.
You are assuming that the parts are running at margin. Do you actually know that for a fact with a Samsung DIMM? Probably not. Does it give you peace of mind knowing that the premium DIMM has better components? If it does, and you made a decision for peace of mind, fair enough. That doesn’t mean that Samsung-branded memory is not good.
Let me give another example though. Why will most people tell you to stay away from ECS motherboards? Functionally, they use the same northbridge as an Asus, same southbridge, etc. Yet, people pay a lot more for an Asus board. Why? Because the latter are just plain better built. They’re built on better PCBs with better wire routing. They use better capacitors and better resistors. These things make them more reliable in the long run.
There are more components on a motherboard than on a DIMM.
I was worried about compatibility. I saw that all the folks on macnn.com were using Crucial in their machines, so I went with the herd.
Me too. Went with Crucial.
Sure, for peace of mind, I can understand. But perceived quality and actual quality are never the same. I buy things for peace of mind that probably don’t make sense to many people.
Sure, it may not make sense to many people. But in an expensive computer, I expect expensive components. “Cheap but just as good” doesn’t cut it. That’s precisely what people say about iPod competitors.
You are assuming that the parts are running at margin. Do you actually know that for a fact with a Samsung DIMM? Probably not.
Given how Samsung sells RAM, it’s highly unlikely that this generic DIMM is the best from their bins. It is much more likely to be running at margin than the stuff you get from Mushkin, etc.
If it does, and you made a decision for peace of mind, fair enough. That doesn’t mean that Samsung-branded memory is not good.
At no point did I ever say the Samsung memory isn’t good enough. I just said it wasn’t as good as the memory in my X2. Nothing less, nothing more. I remember reading a review of a new Rolls-Royce, and the reviewer freaked out because he found some Volkswagen switches on the dash. Does that mean the Volkswagen switches weren’t good enough? No, it just means that the components in a product need to live up to the reputation of the manufacturer. All I wanted to do was prove that the mythos of “Apple hardware quality” has no backbone. I’m fully willing to believe that their quality assurance is better, their integration is better, or that their customer service is better*, but I just want to stop hearing that their parts are better, because it’s mostly just standard OEM stuff. The thing is, you don’t even seem to disagree with me on this point.
* With me, however, Apple’s customer service is 0-1. They shipped my Mac with a Spanish keyboard, and though I called two weeks ago, I still have yet to receive a replacement. I’ve sent my Dell in for servicing and gotten it back quicker than this.
There are more components on a motherboard than on a DIMM.
Does that change the argument any? One of the big reasons motherboards differ in reliability is because of PCBs, signal routing, and components in the power-delivery chain. DIMMs have all of these same issues, just on a smaller scale.
Sure, it may not make sense to many people. But in an expensive computer, I expect expensive components. “Cheap but just as good” doesn’t cut it. That’s precisely what people say about iPod competitors.
I don’t get your logic. Name one computer manufacturer at Apple’s level that uses Corsair, Patriot, Mushkin or OCZ modules.
Given how Samsung sells RAM, it’s highly unlikely that this generic DIMM is the best from their bins. It is much more likely to be running at margin than the stuff you get from Mushkin, etc.
I seriously doubt you can say that without test data. That is your opinion and wishful thinking, I am afraid.
All I wanted to do is prove that the mythos of “Apple hardware quality” has no backbone. I’m fully willing to believe that their quality-assurance is better, their integration is better, or that their customer service is better*, but I just want to stop hearing that their parts are better, because it’s mostly just standard OEM stuff. The thing is, you don’t even seem to disagree with me on this point.
I think you will find a strong disagreement from me that the brands you call premium are any better than the standard OEM stuff.
Yes, Corsair might have better margins and higher speed-bin parts. But to extrapolate that and claim that they somehow have fairy dust sprinkled on them to make them higher quality is a leap of faith. I work in the industry and find your statement hard to believe. OEMs aren’t idiots who don’t know what they are doing.
With me, however, Apple’s customer service is 0-1. They shipped my Mac with a Spanish keyboard, and though I called two weeks ago, I still have yet to receive a replacement. I’ve sent my Dell in for servicing and gotten it back quicker than this.
I have had the exact opposite experience. The turnaround for my PowerBook was 3 days door-to-door.
Does that change the argument any? One of the big reasons motherboards differ in reliability is because of PCBs, signal routing, and components in the power-delivery chain. DIMMs have all of these same issues, just on a smaller scale.
Yes, it does. DIMMs have one protocol and signaling scheme to deal with. Motherboards have many, not to mention the complexity of the traces.
I don’t get your logic. Name one computer manufacturer at Apple’s level that uses Corsair, Patriot, Mushkin or OCZ modules.
Alienware’s Area-51 desktops and laptops use Corsair XMS. You can say that Alienware isn’t at the level of Apple, but then you’re just basically admitting that Apple’s component quality is at the level of HP, Compaq, Dell, etc. Since you don’t seem to dispute this claim, and I have made no other claims regarding component quality, why do you keep pushing the issue? Again, I never said Apple’s components were low quality. All I tried to do was disprove the myth that Apple uses components that are better than the commodity ones you find in Dell, HP, etc., boxes. Since you don’t seem to disagree, why keep arguing?
I seriously doubt you can say that without test data. That is your opinion and wishful thinking, I am afraid.
Yes, because there is no reason to believe expensive, binned chips are better quality than the ones Samsung can’t get away with selling to third parties. Please.
I think you will find a strong disagreement from me that the brands you call premium are any better than the standard OEM stuff.
You’re free to believe that, but facts are facts. Premium RAM uses better chips and better PCBs. You can argue whether this makes a difference or not, but they are objectively higher-quality parts.
Yes, Corsair might have better margins and higher speed-bin parts.
And better PCBs and better surface-mount components (resistors, etc).
But to extrapolate that and claim that they somehow have fairy dust sprinkled on them to make them higher quality is a leap of faith.
No extrapolation necessary. Better chips, better PCBs = higher quality RAM.
I work in the industry and find your statement hard to believe. OEMs aren’t idiots who don’t know what they are doing.
Of course not. OEMs are, however, supremely cost-conscious. To them, a 50% increase in RAM cost isn’t worth it for a 10% (or whatever) increase in reliability. They can easily eat the returns and still come out ahead. Apple was allegedly different. Apple allegedly priced their computers higher so they could include quality components. Again, I don’t believe that myth, and you don’t seem to either, so why continue arguing? As you yourself said: Apple PCs are more expensive because Apple makes more profit on each one, for the industrial-art case design, and to fund the development of Apple software. It’s not the component quality that differentiates Apple computers — and that was my only claim here.
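For what it’s worth, the return-cost logic above can be sketched as a toy expected-cost calculation. All the numbers here are hypothetical, purely to illustrate the trade-off an OEM faces:

```python
# Hypothetical numbers only: illustrates how a cheaper part with a
# higher failure rate can still have a lower expected per-unit cost
# once RMA handling is priced in.
def expected_cost(unit_cost, failure_rate, return_handling_cost):
    """Expected per-unit cost = purchase price + expected RMA cost."""
    return unit_cost + failure_rate * return_handling_cost

cheap = expected_cost(unit_cost=50.0, failure_rate=0.03, return_handling_cost=40.0)
premium = expected_cost(unit_cost=75.0, failure_rate=0.01, return_handling_cost=40.0)
print(cheap, premium)  # 51.2 75.4 -- the cheap part wins on expected cost
```

With made-up numbers like these, the cheaper part comes out ahead even though it fails three times as often, which is exactly why a cost-conscious OEM can rationally skip premium components.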
Yes, it does. DIMMs have one protocol and signaling scheme to deal with. Motherboards have many, not to mention the complexity of the traces.
It’s just a matter of scale. If low-quality PCBs are a source of problems, they’ll be a source of problems whether you use them in a motherboard or in a DIMM.
Alienware’s Area-51 desktops and laptops use Corsair XMS. You can say that Alienware isn’t at the level of Apple, but then you’re just basically admitting that Apple’s component quality is at the level of HP, Compaq, Dell, etc. Since you don’t seem to dispute this claim, and I have made no other claims regarding component quality, why do you keep pushing the issue? Again, I never said Apple’s components were low quality. All I tried to do was disprove the myth that Apple uses components that are better than the commodity ones you find in Dell, HP, etc., boxes. Since you don’t seem to disagree, why keep arguing?
Please go ahead and compare an Alienware X2 4400+ in price to your rig. It is on the order of $1000 more expensive than your DIY one. You could argue that your component selection was superior then too. Maybe the box would be noisier than yours, have noisier fans, a noisier hard drive too, and the keyboard wouldn’t be the same as your Cherry one.
What point are you trying to make?
Yes, because there is no reason to believe expensive, binned chips are better quality than the ones Samsung can’t get away with selling to third parties. Please.
There is no reason to believe they are worse either. You are basing your notion of quality on price and fancy packaging and hearsay.
You’re free to believe that, but facts are facts. Premium RAM uses better chips and better PCBs. You can argue whether this makes a difference or not, but they are objectively higher-quality parts.
Do you have shmoo plots proving your point that the parts from OEMs are at margin and have no headroom?
Or are you just claiming the PCB and electronics are better based on belief and how things look and feel and price?
And better PCBs and better surface-mount components (resistors, etc).
Do you have pictures with the heat spreader removed on a premium DIMM to prove that point?
No extrapolation necessary. Better chips, better PCBs = higher quality RAM.
Based on belief…..
Of course not. OEMs are, however, supremely cost-conscious. To them, a 50% increase in RAM cost isn’t worth it for a 10% (or whatever) increase in reliability.
So Corsair, Patriot, and OCZ live on 0.5% margins and give you higher ROI…. !!!! Please talk some sense. Corsair makes margins too, probably 15-20%, and because their volumes are a lot lower and their prices higher, I can bet you their margins are a lot higher than OEMs’.
Again, I don’t believe that myth, and you don’t seem to either, so why continue arguing? As you yourself said: Apple PCs are more expensive because Apple makes more profit on each one, for the industrial art case design, and to fund the development of Apple software. It’s not the component quality that differentiates Apple computers — and that was my only claim here.
You can choose to believe what you want. There are people that believe placing a shiny rock on their CD players and coloring the edges of their CDs with a green marker makes their sound systems give them orgasms. And they pay top dollar for those beliefs.
Since you have no real scientific proof, your beliefs are just that.
Edited 2005-11-17 19:23
Please go ahead and compare an Alienware X2 4400+ in price to your rig. It is on the order of $1000 more expensive than your DIY one.
In what alternate reality? Price out an Alienware Aurora 7500. Upgrade the CPU to a 4400+, the RAM to 2GB, and the hard drive to 250GB. That comes out to $2387. Subtract $150 to account for the price difference in graphics cards, and it comes out to $2237.
What point are you trying to make?
You asked me what OEM used Corsair RAM. I told you.
There is no reason to believe they are worse either. You are basing your notion of quality on price and fancy packaging and hearsay.
For $2700, “not worse” doesn’t cut it.
Do you have shmoo plots proving your point that the parts from OEMs are at margin and have no headroom?
Do you have plots to prove otherwise? At least I have conventional wisdom on my side. Go to ArsTechnica’s hardware forum and ask whether anybody would recommend buying generic Samsung memory versus name-brand memory. These people could very well be wrong, but you better have some evidence before claiming so.
Based on belief.
Yes, but usually when large groups of people believe something, there is a reason for it. Personal experience suggests that buying cheap electronics rarely turns out for the best. I’ve bought enough ECS motherboards to not make that mistake again. If you have proof to the contrary, I’ll be glad to see it. Barring that, I’ll go by my experience and the words of others.
So Corsair, Patriot, and OCZ live on 0.5% margins and give you higher ROI…. !!!! Please talk some sense. Corsair makes margins too, probably 15-20%, and because their volumes are a lot lower and their prices higher, I can bet you their margins are a lot higher than OEMs’.
Presumably the OEMs don’t manufacture their own RAM, and would have to source their high-quality memory from the likes of Corsair, etc. Even in those situations, they are surely not living on 0.5% margins, so the point still stands.
You can choose to believe what you want. There are people that believe placing a shiny rock on their CD players and coloring the edges of their CDs with a green marker makes their sound systems give them orgasms. And they pay top dollar for those beliefs.
You’ve dodged my point completely. Even if you don’t believe that buying premium parts is worthwhile (the proverbial rock on the CD player), you still have to admit Apple’s computers don’t contain premium parts. That’s not belief, that’s fact.
In the week or so of Mac threads that came up before I wrote this article, I saw people say several times something along the lines of “at least Apple’s parts are better than what you find in your crappy Dell.” Having owned both a Dell and an Apple, I can now say, no, they’re not. That doesn’t mean Apple’s parts are bad, or Dell’s are good, but simply that Apple, in parts quality, is no different from Dell, or HP, or any other major PC OEM. That’s all I’m trying to prove.
For $2700, “not worse” doesn’t cut it.
I already explained to you how memory is sourced. If you believe that the Apple memory is a generic Samsung stick… I don’t know what to say. I posted the part number, and it isn’t a model Samsung sells.
Generic means that the part is a commodity part procurable through retail channels or was already available in the market and Apple chose to use it.
From the part number it is clear it isn’t generic but Apple-specific, your silly definition of generic notwithstanding.
Do you have plots to prove otherwise? At least I have conventional wisdom on my side. Go to ArsTechnica’s hardware forum and ask whether anybody would recommend buying generic Samsung memory versus name-brand memory.
You made a claim about low margins and the supposedly high quality of “premium” RAM. The onus is on you to provide the proof. You claimed objectivity based on subjective metrics.
My claim is that you can’t tell either way without measurements.
Yes, but usually when large groups of people believe something, there is a reason for it.
Large groups of people believed at one time that the Earth was flat. Did that make it true?
Personal experience suggests that buying cheap electronics rarely turns out for the best. I’ve bought enough ECS motherboards to not make that mistake again. If you have proof to the contrary, I’ll be glad to see it. Barring that, I’ll go by my experience and the words of others.
So you are claiming that ECS motherboards and OEM DIMMs from Samsung, Micron, Hynix, and Infineon are of the same quality?
So every PowerMac and Dell is crashing because of memory errors????
Presumably the OEMs don’t manufacture their own RAM, and would have to source their high-quality memory from the likes of Corsair, etc. Even in those situations, they are surely not living on 0.5% margins, so the point still stands.
I am sure Alienware doesn’t pay the same for the XMS DIMMs from Corsair that an individual pays when ordering from their website. I am guessing you don’t realize the meaning of business contracts, OEM contracts, and volume pricing.
Since you are no OEM and are buying retail, suffice it to say that you paid a lot more for your RAM than Alienware did for theirs.
You’ve dodged my point completely. Even if you don’t believe that buying premium parts is worthwhile (the proverbial rock on the CD player), you still have to admit Apple’s computers don’t contain premium parts. That’s not belief, that’s fact.
I would admit to it if we could come to a consensus on what a generic component is, what a premium component is, and, most importantly, whether the premium component is any better or just prettier.
Can you buy the exact part number for the Samsung DIMM from Samsung, Best Buy, or any local computer shop? If you can’t, it isn’t generic.
In the week or so of Mac threads that came up before I wrote this article, I saw people say several times something along the lines of “at least Apple’s parts are better than what you find in your crappy Dell.” Having owned both a Dell and an Apple, I can now say, no, they’re not. That doesn’t mean Apple’s parts are bad, or Dell’s are good, but simply that Apple, in parts quality, is no different from Dell, or HP, or any other major PC OEM. That’s all I’m trying to prove.
Sure, but your proof is based on a comparison with a home-built PC with hand-selected components specifically chosen to meet your needs for silent computing. That is where the problem begins.
You didn’t take the latest Dell XPS system and compare it to the PowerMac.
You asked me what OEM used Corsair RAM. I told you.
Correction: Alienware used to use Corsair memory when they started in the custom PC building business. They announced a deal with Corsair in 2002. Now all their modules are self-branded DIMMs. I doubt that they use Corsair anymore. I can’t find any information on who supplies DIMMs to them.
Do you have any links?
The only person with an asinine stance is you. Define “premium”?
For the sake of argument — let’s define it in terms of cost. Premium RAM = RAM that costs more.
Also, I’d point out that the differences in premium DIMMs don’t just come from “overclocking”. They use much better PCBs with more layers (6-8) than most generic sticks (which use 4). They bin chips that will run at tight timings (CAS2 for DDR, CAS3 for DDR2), instead of the standard JEDEC timings. They also cost more.
[i]Also, I’d point out that the differences in premium DIMMs don’t just come from “overclocking”. They use much better PCBs with more layers (6-8) than most generic sticks (which use 4). They bin chips that will run at tight timings (CAS2 for DDR, CAS3 for DDR2), instead of the standard JEDEC timings. They also cost more.[/i]
The Samsung DIMMs are CAS3 for DDR2-400, which is what the memory controller on the X2 supports. It is CAS4 for DDR2-533.
The memory controller in the X2 supports DDR400, not DDR2. The Athlon64 depends on low-latency memory — DDR2 is not a good fit for it, at least until it gets up to DDR2-667 or DDR2-800.
The fact that the Samsung DIMMs are CAS3 for DDR2-400 is irrelevant. Nobody uses DDR2-400, and it’s easy to get chips that run at tight timings when you lower the clock speed like that (note: DDR2-400 implies a 100MHz memory chip — DDR-400 implies a 200MHz memory chip). High-quality DDR2 runs at CAS3 at 533 or even 667.
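Since CAS latency is counted in I/O bus clock cycles, the absolute latency in nanoseconds is what matters, and a low CAS number at a low clock buys you nothing. A rough sketch with nominal rates (real modules have more timing parameters than just CAS):

```python
# Absolute CAS latency: CAS cycles divided by the I/O bus clock,
# where the bus clock is half the effective (double data rate) speed.
def cas_latency_ns(cas_cycles, effective_rate_mhz):
    bus_clock_mhz = effective_rate_mhz / 2.0
    return cas_cycles / bus_clock_mhz * 1000.0

print(cas_latency_ns(2, 400))  # DDR-400  CAS2 -> 10.0 ns
print(cas_latency_ns(3, 400))  # DDR2-400 CAS3 -> 15.0 ns
print(cas_latency_ns(3, 533))  # DDR2-533 CAS3 -> ~11.3 ns
```

So CAS3 at DDR2-400 is actually 50% slower in absolute terms than CAS2 DDR-400, which is the point being made about the G5’s modules.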
The memory controller in the X2 supports DDR400, not DDR2. The Athlon64 depends on low-latency memory — DDR2 is not a good fit for it, at least until it gets up to DDR2-667 or DDR2-800.
OK, fine. But then you are not comparing like with like again… it still doesn’t make this a valid comparison of memory DIMM quality.
Nobody uses DDR2-400, and it’s easy to get chips that run at tight timings when you lower the clock speed like that (note: DDR2-400 implies a 100MHz memory chip — DDR-400 implies a 200MHz memory chip).
What are you talking about? The DRAM core frequency is the same in DDR and DDR2; the I/O buffers are clocked higher in DDR2.
The DRAM core frequency can be lower because the I/O buffers are clocked higher. DDR2 puts out 4 bits per clock cycle (using two ports). That means to get an effective transfer rate of 533 megabits/second per pin, DDR2-533 can get away with using a 133MHz DRAM core. DDR1 puts out 2 bits per clock cycle (using one port). That means to get 400 megabits/second per pin, DDR-400 needs to use a 200MHz DRAM core.
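That arithmetic can be written out directly (prefetch depth of 2 bits per pin per clock for DDR, 4 for DDR2):

```python
# DRAM core clock implied by an effective per-pin transfer rate and
# the prefetch depth (2n for DDR, 4n for DDR2).
def core_clock_mhz(effective_rate_mhz, prefetch):
    return effective_rate_mhz / prefetch

print(core_clock_mhz(400, 2))  # DDR-400  -> 200.0 MHz core
print(core_clock_mhz(400, 4))  # DDR2-400 -> 100.0 MHz core
print(core_clock_mhz(533, 4))  # DDR2-533 -> 133.25 MHz core
```

This is why DDR2 only needs to match DDR’s core clock once it reaches roughly twice the effective rate.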
It dawns on me that for some reason you confuse a premium brand with a brand name you recognize. For instance McDonald’s is a premium brand dining experience, Micron is a premium brand RAM experience, and Sony is a premium brand consumer electronics experience. I suppose Creative Labs is a premium brand of computer audio equipment. Since you recognize Samsung from your television you think that conveys the status of being a premium brand, whereas Mushkin which you admit to knowing nothing about has a colored heat spreader and despite offering a superior product to the one you have, are not. Haha. We can buy some Samsung memory, some crappy Sony headphones, and a Big Mac and we’ll be buying premium products. Way better than those PC kids.
it dawns on me that for some reason you confuse a premium brand with a brand name you recognize. For instance McDonald’s is a premium brand dining experience, Micron is a premium brand RAM experience, and Sony is a premium brand consumer electronics experience. I suppose Creative Labs is a premium brand of computer audio equipment. Since you recognize Samsung from your television you think that conveys the status of being a premium brand, whereas
Sorry. In case you didn’t know, Samsung supplies the chips that go into Corsair products.
Mushkin which you admit to knowing nothing about has a colored heat spreader and despite offering a superior product to the one you have, are not. Haha. We can buy some Samsung memory, some crappy Sony headphones, and a Big Mac and we’ll be buying premium products. Way better than those PC kids.
I know more about branding in my sleep than you would wide awake. I don’t go by brands only. BTW, I don’t have a Samsung TV or any Samsung product. The single-chip DLPs made me see rainbows and I returned mine promptly for a three-chip LCoS TV. The Sony LCD RPTVs were fairly bad at the time. Their Qualia line is superb, though. I don’t have a thing for Sony. In fact, no two components in my home theater are from the same manufacturer. I never eat at McDonald’s or other fast-food places, and I try to avoid chains for that matter. For my headphones I’ll stick with Sennhiser (best bang for the buck); I can’t really afford the Orpheus, though.
I have some PMC speakers in my home theater, ever hear of them? I am sure you haven’t. Look them up.
Let’s see the GamePC review of Samsung DIMMs versus Corsair:
http://www.gamepc.com/labs/view_content.asp?id=ddr2comparo&page=3
Amazingly, the Corsair XMS2 modules, which are geared towards the enthusiast markets with their integrated heat spreaders, were the only modules we could not get to run at 600 MHz speeds on our 925x platform, while the Infineon and Samsung modules were perfectly happy to run at this faster clock speed. We even tried bumping up the voltage levels and increasing the cooling modules on the XMS2 modules, but nothing worked. Asus actually lists the XMS2-533 as “compatible” in 600 MHz mode on this motherboard.
Read carefully. The term for Corsair is “enthusiast,” not “premium.” It might be premium in your deluded state of mind, but it is not considered premium. The Samsung DIMMs even overclock better. Shame.
Edited 2005-11-16 17:10
Your mad Google skills impress me. Though I share your taste in “Sennhiser [sic],” what you didn’t appear to get in my post was that I was mocking your association with brand rather than model, not seriously suggesting you think McDonald’s food is a premium product. No one is that retarded. But again, there you are making a weak argument from some random page to suggest that ‘enthusiast’ is disparate from ‘premium.’ Why don’t you just orient yourself with the communities that buy the products?
Hey, do you know who fabricates the nVidia GPUs used in all of those video devices? And all of the memory used on the cards? I guess those cards are all the same. Shame.
> Quality of the fans is subjective.
One can define what the ‘quality’ of a fan is, then perform measurements to ascertain the quality of the fan. Actually, measuring fan quality is probably easier than measuring the quality of the system memory or capacitors for the typical reviewer.
> The memory in my G5 is samsung, I’d hardly call that
> generic. Samsung is one of the leading suppliers of
> memory chips.
Samsung is a company, not a model of memory. They provide a large variety of modules, but they are not a premium performance brand. Mushkin, OCZ, and Corsair are premium brands (though they have ‘value’ models). Crucial isn’t a premium brand either, so it’s not a big improvement.
http://www.newegg.com/Product/Product.asp?Item=N82E16820147147
http://www.newegg.com/Product/Product.asp?Item=N82E16820146426
Tell me which of these is “premium” and which of these is “generic.” Don’t even look at the price, the cost isn’t the important aspect.
Thanks, Rayiner. I have always respected what you have written in the comments section over the last couple of years, although I was a little confused by your comment as a GNOME user, since most of those comments were about KDE :B
Anyway, thanks for this great article, written from the experience of having used multiple platforms.
I too like OS X’s menu at the top. My first experience with OS X was when I got my first Mac mini earlier this year.
My first reaction was: is this interface for noobs?! I found it difficult and confusing at first, but it is an interface that has grown on me, and I find myself missing it when I go back to Windows.
BTW I can imagine the amount of dust your G5 will collect over a year compared to your X2 machine.
I have the Sonata II case and it is blissfully quiet, as long as you don’t connect the blue-and-black PWR connector to the motherboard, and it has a dust filter at the front. I am amazed how much dust it catches and mostly prevents from entering the case.
The main reason I don’t currently use a G5 Mac is the intolerable level of noise they produce. I work on audio and video; when I play something, I don’t want the audio competing with fan noise. I bought fairly high-end AV equipment to reduce hiss and distortion, so I’m not going to put up with my computer drowning out the detail.
The G5 may not be so bad compared with generic PCs where no effort at all has gone into noise reduction, some hard disks and PSUs make a terrible noise on their own. But it’s easy to build a low noise yet high performance PC and there are companies that specialise in low noise computing. At the moment it isn’t possible to get a G5 desktop that doesn’t make far more noise than my 3 PCs put together.
If I can build a low-noise dual-core computer that doesn’t run hot, surely Apple’s engineers can do the same? For all its build quality and complexity, I think the thermal design of the G5 is a bit of a disaster.
If you don’t care about noise then it’s not a problem, but the number of sites and products dedicated to noise reduction show that a lot of people do care about this. Apple are missing out on a market by making such unnecessarily noisy computers.
The new dual cores must generate more heat?
My dual 2.0 G5 is the quietest machine I have ever had, even when processing large Motion, Final Cut, and Logic projects.
Every time I read a review of an Apple product, it seems the writer always has to come up with some very uninformative comments, as if the author needs to say something they don’t like about the product. That is OK, but very often it ends up with comments that are not very correct.
If I read this,
“The PowerMac G5 is a frustrating machine. As a piece of hardware, it is inferior in almost every way to the Athlon X2 machine sitting next to it. That is not to say that it is a bad piece of hardware, but rather that it can’t match up to a very excellent machine. However, software is what counts and OS X is the platform that runs the applications I want to run. For that reason alone, the PowerMac isn’t going anywhere any time soon.”
Well, OK, the thing which amazes me is “piece of hardware, it is inferior in almost every way to the Athlon X2 machine”.
Well, it’s surprising because it’s wrong and unfair to say that. I mean, the author wants to make people believe that the hardware is inferior in a PowerMac G5 which has 19 thermal sensors, nine independent fans, four different thermal zones (each thermal zone and each fan independently controlled by the OS), an aluminum case for easy thermal conduction designed for easy airflow from front to back, and extremely easy access to the machine’s interior with one door to open. Oh, that is just too much of a lack of honesty…
We have dual 2.7 GHz G5s in our lab; that is a very quiet machine, quieter than any other similar dual-processor PC that we have in the lab.
Along with this, there is this comment:
“For the extra $700, the PowerMac is slower, louder, uses cheaper parts, and runs OS X. The mathematics of the situation are clear. If OS X is worth an extra $700 plus performance and silence to the buyer, the PowerMac is a good value.”
Slower? Why? From what I saw in those tests, I don’t think we can conclude that the G5 is slower; it’s definitely a better machine for people who have to deal with floating-point performance, and I could use different tests to show that it’s even stronger than that. Why can’t the author qualify his conclusion? And the conclusion about the price is woooo!!
I mean, hey!!! Does the author compare both configurations fairly? The G5 supports ECC memory, the X2 does not; the G5 supports up to 16 GB, the X2 only 4 GB max. So just the ECC is more than worth the $700 difference he is talking about. I would say that this X2 is more of a gamer machine; for a pro user who needs a reliable machine that can handle large computational problems, the G5 is definitely better. Moreover, how many Gigabit controllers does the X2 machine have (the G5 has two)? How many PCI Express slots does it have? Are they connected to the southbridge controller (if so, it can limit the throughput of any device connected via PCI Express, or when the computer has to handle heavy network tasks, or both), or are they directly connected to the system controller like the G5’s, for maximum throughput? There are a lot more things to compare before concluding about the price advantage of either machine.
What about the software? OS X comes with something that is very important if the machine has to be used in scientific research: out-of-the-box, powerful, high-performance libraries (the Accelerate framework) that allow people to write very fast code very easily. An example: anyone who wished could get much better performance (than that obtained by the author) for FFT computation by using vDSP, which takes care of any vectorization, and yes, it supports double-precision FFTs too. These kinds of things are very important for the people the G5 is aimed at, and they do not come out of the box with Linux. On AMD you would have to get their library. I used it sometimes, and believe me, it is slower than the Accelerate framework, and much, much less intuitive to use.
So comparing to machines its not an easy thing, many, many factors come to play ib the final decision. Again in the perpective of using the two machines in a pro environment, the G5 is clearly better than a machine built wih an Athlon X2 (on amd you would have to get machines with Opterons for the same target much more expensive than a powermac G5).
My next point concerns the author's poor understanding of what 64 bits means. The author wrote:
“The G5 is fastest when running 32-bit code, and the X2 is fastest when running 64-bit code. ”
That’s nonsense. Hey, you are a student of aerospace engineering and you don't know what 64 bits means? For users, 64 bits means two things:
– 64-bit integers
– 64-bit memory addressing, that is, a processor that is able to address terabytes of memory.
Nothing else. If an OS allows any process to address more than 4 GB of memory, then the OS is 64-bit. You wrote this: “Moreover, OS X and its apps are almost completely 32-bit, while Ubuntu and its apps are almost completely 64-bit.”
This sentence is meaningless, and this misunderstanding of how Tiger handles 64 bits comes up very often (for more information I would advise the author to read Ars Technica's Tiger article). OS X allows any process to address more than 4 GB of memory, that is, up to 4 terabytes of physical memory and up to 16 exabytes of virtual memory. The limitation is that OS X only allows processes that run in the shell to address 64 bits. Any code that makes calls to the Carbon or Cocoa APIs cannot run in 64-bit mode at present. That's OK, because most (actually, the only) people who use 64 bits are people in high-performance computing, who generally need to write 64-bit apps that do not use a GUI. For apps that do, it is still possible to write a 64-bit thread to handle a task that needs 64-bit computing, and to send the results to the app via Mach message passing. That's an elegant solution, because a high-end app does not always need to address large memory as a whole, only for certain high-end tasks that can be handled in separate threads.
So OS X is as 64-bit as Ubuntu is; OS X provides 64-bit support for the people who need it, and you really do not need a 64-bit compiled mail app. You don't believe me? Well, an example: the latest version of Mathematica, 5.2, supports 64-bit addressing across many OSes, including Tiger.
In general, when the author speaks about Ubuntu's 64-bit apps, those apps are generally not 64-bit; they are 32-bit apps recompiled to run on 64-bit chips in a 32-bit mode.
AMD is the champion of 64-bit marketing. They always come up with some test comparing 32-bit and 64-bit apps and conclude that 64-bit chips are much faster, and the author seems to have fallen into the marketing trap. First of all, when AMD talks about 64-bit apps, most of their tests actually use 32-bit apps recompiled for 64-bit chips, which have more registers (16 on 64-bit AMD chips, 8 on 32-bit chips).
The app is still 32-bit but performs better in the so-called 64-bit mode, simply because the chip has more registers, which improves the performance of 32-bit apps with a simple recompilation. But again, the app has not been rewritten to address memory with 64 bits (you need to write code to do that), so the app is not running in a true 64-bit mode; it is still 32-bit.
Worse, a French team (x86-secret.com) discovered a few months ago that AMD, in one of their comparisons between 32-bit and 64-bit apps, actually compared a 32-bit function written in C without any optimization (this was the “32-bit app”; the function came from the source code of an unzip utility) against a highly optimized 32-bit function hand-written in assembly (this was the so-called “64-bit app”). Guess which one wins. They made the 64-bit performance look very important, but it turned out to be a huge manipulation of users.
If you recompile a 32-bit app to run on a G5, well, it will perform as if the G5 were 32-bit. That's normal, again because the app is still 32-bit, not 64-bit.
How can you conclude that the X2 is faster running 64-bit code when you did not actually run any 64-bit tests? If you want to test both machines on 64-bit apps that deal with large data sets, you need to run some code that actually addresses more than 4 GB. That's how to test 64-bit performance. You did not do so, so your statement is wrong!
There are also many strange comments from the author about the question of memory and disk, but I guess previous posters have already replied to those.
For users, 64 bits means two things:
– 64-bit integers
– 64-bit memory addressing, that is, a processor that is able to address terabytes of memory.
The “user” is irrelevant here. We’re talking about benchmarks of a CPU from the perspective of a developer. On both CPUs, “64-bit mode” has a very specific meaning. On the Opteron, a 64-bit binary is code that runs with the long-mode bit enabled and uses 64-bit instruction encodings. On the G5, a 64-bit binary is code that runs with the SF bit in the MSR set to 1, and uses the regular SLB mappings.
Nothing else. If an OS allows any process to address more than 4 GB of memory, then the OS is 64-bit.
The OS is 64-bit if the kernel and its libraries are compiled as 64-bit binaries, whatever that implies on the target architecture. Nothing less, nothing more.
This sentence is meaningless, and this misunderstanding of how Tiger handles 64 bits comes up very often (for more information I would advise the author to read Ars Technica's Tiger article). OS X allows any process to address more than 4 GB of memory, that is, up to 4 terabytes of physical memory and up to 16 exabytes of virtual memory.
You’re missing the point. The fact that OS X Tiger allows 64-bit binaries to run does not change the fact that the kernel itself is a 32-bit binary. As someone pointed out to me:
file /mach.sym gives:
mach.sym: Mach-O executable ppc
file ./nbench (which I’ve compiled in 64-bit mode) gives:
nbench: Mach-O 64-bit executable ppc64
The limitation is that OS X only allows processes that run in the shell to address 64 bits. Any code that makes calls to the Carbon or Cocoa APIs cannot run in 64-bit mode at present.
Yes, because Carbon and Cocoa, along with most of the OS, are compiled as 32-bit binaries! 64-bit binaries cannot link to 32-bit binaries.
That’s an elegant solution, because a high-end app…
It’s hardly an elegant solution, just a decent hack until the OS X user-space can be made 64-bit clean.
So OS X is as 64-bit as Ubuntu is; OS X provides 64-bit support for the people who need it, and you really do not need a 64-bit compiled mail app.
No! Every binary in Ubuntu runs in 64-bit mode. No binary, by default, in OS X runs in 64-bit mode.
In general, when the author speaks about Ubuntu's 64-bit apps, those apps are generally not 64-bit; they are 32-bit apps recompiled to run on 64-bit chips in a 32-bit mode.
You have absolutely no idea what you’re talking about. Source code does not have an intrinsic “bit-ness”. Whether you compile it as a 64-bit binary or a 32-bit one is what decides whether it is 64-bit. In OS X, you do this via the GCC flag: -arch ppc64. Most OS X binaries are not compiled with this flag. Instead, they run in a “32-bit mode” on the G5, with the SF bit set to 0.
The app is still 32-bit but performs better in the so-called 64-bit mode, simply because the chip has more registers, which improves the performance of 32-bit apps with a simple recompilation. But again, the app has not been rewritten to address memory with 64 bits (you need to write code to do that), so the app is not running in a true 64-bit mode; it is still 32-bit.
No, the app isn’t still a 32-bit app. It uses 64-bit pointers, and its ‘long’ type is 64 bits. That makes it a 64-bit app! And you don’t need to do any rewriting to have a C app address 64 bits, because it’s already using 64-bit pointers.
How can you conclude that the X2 is faster running 64-bit code when you did not actually run any 64-bit tests?
All the tests on the X2 were run in 64-bit mode. They all operated in 64-bit long mode, used 64-bit pointers, and used the amd64 instruction set.
Just so we’re clear, you think that whether a program compiled for x86_64 or ppc64 uses 64-bit addressing depends on whether it actually uses more than 4GB of memory? If that’s what you think, please let me know.
Thank you Rayiner for a very good and mostly objective comparison article.
Rayiner,
I am telling you: take a book about computing and read it. You don't know what you are talking about, because you don't understand what 64-bit computing is for. 64-bit apps are simply useless if they do not do what they were developed for. So tell me, what is 64-bit computing for?
I know how a given app runs on a given CPU. It's not by coming up with “code that runs with the long-mode bit enabled and uses 64-bit instruction encodings. On the G5, a 64-bit binary is code that runs with the SF bit in the MSR set to 1, and uses the regular SLB mappings” that you appear to know what you are talking about. I am a developer too; if you don't understand what the compiler is doing, how the 64-bit mode is implemented in a given CPU, and what the resulting compiled app is, you miss everything.
So tell us, please: what is a 64-bit app for you? Why should an app be 64-bit or not? Do you maintain that a simple mail app runs in 64-bit mode? What for? Why, for you, should an app use 64-bit addressing?
“The OS is 64-bit if the kernel and its libraries are compiled as 64-bit binaries, whatever that implies on the target architecture. Nothing less, nothing more.”
“You’re missing the point. The fact that OS X Tiger allows 64-bit binaries to run does not change the fact that the kernel itself is a 32-bit binary. As someone pointed out to me:”
You miss the point, because you don't understand why an OS should provide me with 64-bit computing. Being a user or a developer, it's the same: the developer writes an app and the user uses it. If you benchmark 64-bit apps, you have to benchmark an application that does 64-bit tasks; otherwise what are you benchmarking… smoke? Tell me, please.
“It’s hardly an elegant solution, just a decent hack until the OS X user-space can be made 64-bit clean. ”
Yeah, you don't understand the design of Tiger. Again, if an OS gives me a facility to do 64-bit computing, that is, to address large chunks of memory, then I am running 64-bit apps. I tell you again: developer or user, it's the same definition. There is not one definition of 64-bit computing for users and another one for developers; there is one definition for everyone. You get it?
“You’re missing the point. The fact that OS X Tiger allows 64-bit binaries to run does not change the fact that the kernel itself is a 32-bit binary. As someone pointed out to me:”
Oh no, I did not say that the OS X kernel is 64-bit; it DOES NOT NEED TO BE A 64-BIT BINARY to allow developers to run 64-bit apps. It is a widely accepted idea that a kernel binary must be 64-bit to provide 64-bit computing. Why the hell should the kernel binary itself be 64-bit? To do its job, the kernel does not need 64-bit computing support. What I mean is that to do its job, a kernel does not need to address more than 4 GB of memory; no kernel thread needs to address more than 4 GB of memory. In that case your OS would require a minimum of 4 GB of memory to run; is that ever the case?
I don't think so. No kernel today needs to address more than 4 GB, and it does not need to manipulate 64-bit integers either. Again, your misunderstanding comes from the fact that you don't understand what 64-bit computing is for. To make it simple: I need 64 bits if I need to manipulate 64-bit integers or if I need to manipulate more than 4 GB of memory. That's it, whether user or developer.
“No! Every binary in Ubuntu runs in 64-bit mode. No binary, by default, in OS X runs in 64-bit mode.”
“You have absolutely no idea what you’re talking about. Source code does not have an intrinsic “bit-ness”. Whether you compile it as a 64-bit binary or a 32-bit one is what decides whether it is 64-bit. In OS X, you do this via the GCC flag: -arch ppc64. Most OS X binaries are not compiled with this flag. Instead, they run in a “32-bit mode” on the G5, with the SF bit set to 0.”
You don't know what you are talking about, my friend, because of your understanding of the question.
Ubuntu binaries do run in a kind of 64-bit mode, because 64-bit AMD chips handle any instruction as a 64-bit one, but in practice they run in a 32-bit mode. Those apps are not 64-bit aware, as they don't manipulate more than 4 GB of memory the way a 64-bit app running in 64-bit mode would. If those apps did, and only did, that, they would not be able to run on previous 32-bit AMD processors.
You can tell me now that there are 32-bit and 64-bit binaries for each of 32-bit and 64-bit Linux, but that still does not change much; we are still not using the full potential of 64-bit computing. And here comes the reason why Apple has a 32-bit kernel.
Having a 64-bit kernel for OS X would not allow a single OS for all platforms. Apple does not want to have a kernel for each CPU architecture; that's not user friendly, and again, the kernel does not need to be a 64-bit binary anyway. The result is that OS X runs on everything from the G3 all the way up to the G5 with one kernel.
What about apps?
You seem not to understand what -arch ppc64 does. The Xcode compiler (Apple's GCC version) uses this flag to compile a binary that does need to address memory with 64-bit addresses. But this does not prevent you from using other flags at the same time: Xcode allows you to compile for G3, G4, G5, and 64-bit G5, all in the same universal binary. For the user it's transparent; the OS will launch the appropriate bits for the given CPU. Much more powerful and elegant than having separate binaries for each platform. For users, who usually do not understand the difference between CPUs, it's a great technology: no need to think about what to install on their machine.
” And you don’t need to do any rewriting to have a C app address 64-bits, because it’s already using 64-bit pointers. ”
What?! Most of my apps before the G5, and still some of them that do not need 64-bit computing, do not use 64-bit pointers or declare long double variables. You are not a programmer; how can you say that? The source code of 32-bit apps (before the introduction of the 64-bit Athlon or the G5) does not need to use 64-bit pointers or long double variables. What for? They are supposed to run on 32-bit processors; why the hell would you want to waste memory by doing so? My 32-bit apps were written to run on 32-bit CPUs, not on 64-bit ones. If I figure out that one of them needs, or will benefit from, 64-bit computing, well, I will have to change the code so that it uses 64-bit declarations and is aware that it is manipulating large chunks of data.
Do you realize that what you say is basically wrong?
“All the tests on the X2 were run in 64-bit mode. They all operated in 64-bit long mode, used 64-bit pointers, and used the amd64 instruction set.”
Did you check the source code of the apps used for the tests? Did you see those apps using more than 4 GB of memory when you ran them? If not, reconsider your statement.
Have a nice day!
Okay, you seem to have a very big misunderstanding of what 64-bit programs actually are. I’m having a lot of trouble following your argument, but I think I know where your problem lies. Simple experiment. Can you give me a very simple example of a 64-bit program, including these declarations you speak of?
> What?! Most of my apps before the G5, and still some of them that do not need 64-bit computing, do not use 64-bit pointers,
WinXP x64 needs x64 device drivers, i.e. 64-bit pointers.
> or declare long double variables. You are not a programmer, how can you say that? The source code of 32-bit apps (before the introduction of the 64-bit Athlon or G5) does not need to use 64-bit pointers or long double variables; what for, they are supposed to run on 32-bit processors, why the hell would you want to waste memory by doing so.
1. One could use 64-bit integers, similar to the Creative Labs Emu10k's use of a 67-bit accumulator, i.e. for mixing multiple 24-bit/32-bit PCM streams.
2. Due to its CISC nature (variable-length instructions), an x64 binary doesn't double in size compared to an IA-32 binary (thus saving on SIMD register resources).
3. IA-32 Wintel applications run on the WOW64 layer (IA-32 layer).
> Those apps are not 64-bit aware, as they don't manipulate more than 4 GB of memory the way a 64-bit app running in 64-bit mode would. If those apps did, and only did, that, they would not be able to run on previous 32-bit AMD processors
x64-compiled programs don't run on IA-32 processors, BTW.
> If you benchmark 64-bit apps, you have to benchmark an application that does 64-bit tasks; otherwise what are you benchmarking
Refer to the Cinebench x64 edition or Lightwave x64 edition benchmarks on NUMA 4 GB + 4 GB Opteron boxes.
Even though the drives measured aren't exactly the same, the same family was tested by AnandTech.
http://anandtech.com/storage/showdoc.aspx?i=2577&p=14
WD160JS: 55.3 dBA transfer, 52.4 dBA idle
Barracuda 7200.8 400GB: 55.4 dBA transfer, 54.8 dBA idle
Rayiner, everything but the fans that you claimed to be superior in your X2 I have refuted with facts and data. Apple uses name-brand DIMMs and a good-quality drive similar to the Seagate.
BTW, the Silent PC Review forums do post manufacturer specs and don't have too many nice things to say about Seagates, especially the 7200.8s.
Ah, that’s useful. I didn’t realize that the JS had been tested. All of the measurements seem high. From how close were these noise levels measured?
You have a VERY big misunderstanding of what 64-bit programs are, not me. I asked you in my previous message what a 64-bit app is for you; what is a 64-bit app for? I am still waiting for your answer… 64-bit computing is not a couple of flags in a compiler. I am waiting…
I don't have time to teach you how 64 bits works and how it is implemented in Tiger; just make the effort to check
http://arstechnica.com/reviews/os/macosx-10.4.ars/4
You want an example of a 64-bit app? Check this:
http://www.wolfram.com/products/mathematica/newin52/64bit.html
Post code, dammit. Write me a 64-bit version of “factorial”, whatever you want. You seem to think there are some source code changes that differentiate 64-bit code from 32-bit code, and I’m trying to show that there aren’t. Specifically, what are these declarations you keep babbling about?
Here, at the Apple developer web site, is a document explaining how to write 64-bit apps, and the changes developers should consider making if they want to migrate a 32-bit app to a 64-bit one.
Please read it carefully and try to understand it before writing your next message. You will see that it's not true to say that a simple recompilation of a 32-bit app will transform it into a 64-bit one.
And to make you think a little bit about your statement, here is, according to the document, why you should consider making an application 64-bit:
“You should consider making your application 64-bit if:
It involves random access manipulation of “huge” data objects (at least 2 GB)
It needs concurrent access to a quantity of data that will not fit in a 32-bit address space (multi-gigabyte data modelling, for example)”
Here is why you should not migrate to 64 bits
“You do not need to transition your application to a 64-bit executable format (and in some cases, should explicitly avoid doing so) if your application:
Uses a streaming data access model
Performs 64-bit math
Is performance sensitive
Needs to run (in the same version) on 32-bit hardware
Cannot easily be separated into separate GUI and processing tasks”
Notice the “Needs to run (in the same version) on 32-bit hardware”.
Do you agree with that? If yes, you agree with my statement, which is in line with the above definition.
http://developer.apple.com/documentation/Darwin/Conceptual/64bitPor…
You will see that it's not true to say that a simple recompilation of a 32-bit app will transform it into a 64-bit one.
Except it will! Your links don’t show anything to the contrary.
All your commentary about “should” is irrelevant. I never said anything about whether OS X should or should not be compiled as 32-bit. I simply said that it was compiled as 32-bit, and Ubuntu was compiled as 64-bit.
> Please read it carefully and try to understand it before writing your next message. You will see that it's not true to say that a simple recompilation of a 32-bit app will transform it into a 64-bit one.
Refer to *AnyCPU* in WinX64's managed code. If you build an architecture-agnostic application in Whidbey (aka MS Visual Studio 2005), where C# has this as *AnyCPU*, then it will run as a 32-bit application on a 32-bit platform and a 64-bit application on a 64-bit platform.
Very good review, with a good mix of objective and subjective. Hard facts were nicely blended with personal comments and thoughts. All in all, I wish other contributors would strive to reach this high standard of writing!
I like the article. It is well written and fun to read. I agree on about every point the author raises too, except the menubar issue. I think the way OS X handles the menubar sucks, sorry. I welcome the day applications have no menubars. Certainly bookmark material. Good job, Rayiner.
“Given that the majority of PowerMac buyers will be running with less than 4GB of RAM and non-ECC memory, I’d say yes, the noise factor of the system is a more important concern. To me, who has only 2.5GB in the machine and has no use for ECC, the RAM factor is completely unimportant.”
No, it changes things, sorry. The price has to do with whether ECC is supported, and with how much RAM the machine supports. And who told you that “the majority of PowerMac buyers will be running with less than 4GB of RAM and non-ECC memory”? Did you ask people about that? Where did you get this info from?
As the PowerMac is a machine for pro users, surely we can suspect that users will be running with more than 4 GB and ECC (ECC support has been a strong wish from users, according to Apple).
1) It changes the pricing marginally. All Athlon64 processors support ECC (as well as 2GB DIMMs). If you really want ECC RAM, just use a different motherboard (eg: Tyan K8WE). This motherboard runs about $50 more than the one I used, but I also never took into account depreciation since I built the system — the X2 would be $100 cheaper today, due to price reductions on the 4400+ processor since I bought the machine (I got the 4400+ right when it came out).
2) The memory limitation is a function of the CPU (on which the memory controller resides), not the motherboard. Most motherboards are rated for 1GB DIMMs, because that’s the largest commonly available. However, if you use 2GB DIMMs, it should work just fine. The real limitation is the 4 RAM slots vs 8 RAM slots.
The PowerMac may be a machine for a “pro user”, but it’s also in the high-end PC segment, simply because it’s Apple’s only upgradable tower line. From what I’ve seen on the Mac forums, most people who are buying the new dual cores are opting for the 2.5GB configuration.
And where is your definition of 64-bit computing? I am waiting for it. Or you just don't know, do you?
Did you read the document that I told you to read? No? Read it, please. I don't need to give you code as long as you don't understand what the code is for.
You want to read “some source code changes that differentiate 64-bit code from 32-bit code”? Well, read the document that I gave you. You will find examples there, if only you stop being so damn stubborn and at least make the effort to read the information that I provide you.
You want a 64-bit version of a factorial? For what? Read the document that I gave you, or my previous message. It says this:
“You do not need to transition your application to a 64-bit executable format (and in some cases, should explicitly avoid doing so) if your application:
……..
Performs 64-bit math
………………………………”
Read the document, or don't reply to me. That's it!
http://developer.apple.com/documentation/Darwin/Conceptual/64bitPor…..
Of course I’m getting angry. Nothing you say makes any sense, and I’m damn convinced that I’m not the one who is crazy.
And where is your definition of 64-bit computing? I am waiting for it. Or you just don't know, do you?
The definition of “64 bit code”: Machine code which operates on 64-bit pointers.
Did you read the document that I told you to read?
Yes. Nothing in it substantiates anything you say!
You want to read “some source code changes that differentiate 64-bit code from 32-bit code”
The document says nothing about source code changes! It simply says why you’d want to use 64-bit code.
You want a 64 bits version of a factorial, for what?
Your assertion is that you cannot have a 64-bit application just by recompiling source. You referred to “64 bit declarations”. I want to know what the hell you’re talking about.
I’m going to bed. Let me give you a simple example that you should be able to understand.
Source test.cpp:
#include <stdio.h>
int main()
{
    printf("%zu\n", sizeof(int*));
    return 0;
}
g++ -o test test.cpp
./test
4 <-- this is 32-bit code
g++ -arch ppc64 -o test test.cpp
./test
8 <-- this is 64-bit code
OS X is compiled the first way. Ubuntu is compiled the second way. Comprende?
“From what I’ve seen on the Mac forums, most people who are buying the new dual cores are opting for the 2.5GB configuration.” What? Mac forums? Well, if you think that you know all about PowerMac users from forums… wow.
The point that you miss is not about how much RAM users are going to put in their machine, but how much the machine allows them to put in. Any discussion about price is about this.
“The real limitation is the 4 RAM slots vs 8 RAM slots.” Here we go…
And yes, you built the system, and just when I read that, I remembered one of your conclusions, saying this: “As a piece of hardware, it is inferior in almost every way to the Athlon X2 machine sitting next to it.”
So you are saying that you built a better computer than the PowerMac G5, a computer built by top-class engineers who spent many hours of hard work developing it. Or maybe I miss the definition of hardware that you have. This statement is simply ridiculous and wrong. Show me your machine and the layout of the components inside; I want to laugh a little bit.
Hey, just come back to Earth: you are a student who assembled a computer by himself, nothing more. There is no way you can say that you did better work than engineers who specialize in hardware design.
The point that you miss is not about how much RAM users are going to put in their machine
You asked me why I thought most PowerMacs would be used with less than 4GB of RAM. I told you why. Now you’re changing the subject.
Any discussion about price is about this.
Yes, if you need more than 4 RAM slots, the X2 isn’t for you. Now, for the other 90% of the world’s population, that information is irrelevant, isn’t it?
So you are saying that you built a better computer than the PowerMac G5, a computer built by top-class engineers who spent many hours of hard work developing it.
Yep, that’d seem to be the case. Of course, I’d point out that eMachines are computers built by “top class engineers” (I have as much proof of that adjective as you do!) that spent many hours of hard work developing it.
Or maybe I miss the definition of hardware that you have.
It’s in the article.
Hey, just come back to Earth: you are a student who assembled a computer by himself, nothing more. There is no way you can say that you did better work than engineers who specialize in hardware design.
Assembling a computer isn’t rocket science. The smarts go into making the components, not assembling them. Component selection and assembly can be done by any monkey. Most of the bits that set the X2 machine apart from the G5 (the P180 case, the fans, the RAM, etc.) were made by engineers who were apparently more clever than their Apple counterparts. I just performed the job of the guy who gets paid $10/hr to physically put the pieces together.
The proof of the pudding is in the eating. The X2 is much quieter, faster, has better RAM, a better keyboard, a better mouse, and a better graphics card. I don’t care how much smarts went into each design — the question is a matter of end results.
The proof of the pudding is in the eating. The X2 is much quieter, faster, has better RAM, a better keyboard, a better mouse, and a better graphics card. I don’t care how much smarts went into each design — the question is a matter of end results.
What proof? It isn’t that much faster. Your benchmarks had so many variations that it is hard to pick a clear winner on the facts provided. In the end we are left with your ill-informed, biased conclusions.
The better RAM is debatable. There are reviews showing Samsung to be superior to Corsair, even for overclocking.
Better mouse and keyboard are subjective opinions. I use my Apple keyboard and see no problems using it. The Mighty Mouse is good too. My Logitech keyboard and mouse are gathering dust with the Athlon machine the G5 replaced. To each his own.
Everything you talk about as proof is just subjective opinion, and a biased one at that, much of it easily countered with facts from sources much more reliable than you.
Shame. I thought I could expect quality from you when I saw that the article was submitted by you. After reading it and seeing how you handle criticism, I am deeply disappointed.
What proof? It isn’t that much faster.
Out of N benchmarks, the G5 won 9 decisively (by more than 5%). The X2 won 23 decisively. The G5 lost 7 by a large margin (30% or more). The X2 lost 4 by a large margin. Of the 9 benchmarks that the G5 won decisively, 3 were very simplistic (LINPACK, FFTW). The G5 lost every single integer benchmark. The X2 isn’t much faster, by my generous definition of faster, but as I said, based on the results, the 970MP in the G5 hangs with chips a couple of notches down on the price scale from the 4400+ in the X2.
Your benchmarks had so many variations it is hard to pick a clear winner on the facts provided.
The benchmarks had variations because the G5 is a chip that has some obvious strengths, several glaring weaknesses, and a mediocre compiler driving it. However, it’s really not that hard to pick a winner here. Basically, the benchmarks show that if your data set fits in cache, there is little integer code in your algorithm, and GCC doesn’t mess up your code’s scheduling, the G5 can win. In nearly every other case, it loses.
The better RAM is debateble. There are reviews showing Samsung to be superior to Corsair even for overclocking.
No, it’s not. A Civic might show a better 0-60 than a Mercedes, but would anybody really argue about which one is more “premium”?
Better mouse and keybaord are subjective opinions. I use my apple keyboard and see no problems using it. The might mouse is good too.
“Better” was probably an inappropriate term. “Higher quality” would be more appropriate. The Apple mouse and keyboard work just fine (like most keyboards and mice). They are not, however, “high quality” units. The leather in a Civic works just fine too. It’s not “high quality” like the leather in a Mercedes, however.
My logitech keyboard and mouse are gathering dust with the athlon machine the G5 replaced.
The keyboard in the X2 isn’t a Logitech. It’s a Cherry, built in Germany and imported here by a wholesaler. I’m snobby about my keyboards, and any way you slice it, the Cherry is a higher-quality component. Now, many people might prefer the softer feel and lighter weight of the Apple keyboard. That’s fine. That doesn’t change the fact that it uses cheap membrane “key switches” and lightweight plastics.
Shame. I thought I could expect quality from you when I saw that the article was submitted by you. After reading it and seeing how you handle criticism, I am deeply disappointed.
I’m sorry that you’re disappointed, but it has nothing to do with criticism. Many people criticized the article. You, on the other hand, went on a defensive crusade trying to prove to me, variously:
1) That I didn’t hear what I was actually hearing. You even went to the length of accusing me of lying about what I was hearing;
2) That there was no way to judge the quality of particular components (namely the fan and RAM), and that obviously unequal components were of equal quality;
3) That the fact that the PowerMac used lower quality components in several places is irrelevant.
I don’t have anything against criticism. Japail proves me wrong on a very regular basis on these forums. He does not, however, get paranoid and defensive and accuse me of lying.
Out of N benchmarks, the G5 won 9 decisively (by more than 5%). The X2 won 23 decisively. The G5 lost 7 by a large margin (30% or more). The X2 lost 4 by a large margin. Of the 9 benchmarks that the G5 won decisively, 3 were very simplistic (LINPACK, FFTW). The G5 lost every single integer benchmark. The X2 isn’t much faster, by my generous definition of faster, but as I said, based on the results, the 970MP in the G5 hangs with chips a couple of notches down on the price scale from the 4400+ in the X2.
Maybe you didn’t review your article before you wrote the conclusion. Here is a quote from the article, after the BPN and SOM benchmarks.
“Interestingly enough, the G5 beats the X2 in this integer benchmark. The reason appears to be a combination of several factors. First, while this benchmark is integer-heavy, it has very few branches and consists mainly of a large number of memory accesses to multi-dimensional arrays. Since these arrays fit in cache, the G5’s slightly lower-latency L2 and 5% clock-speed advantage allow it to edge out the X2.”
Why are you contradicting your own statement?
BTW, many comments on your benchmark blame GCC for not generating good PPC code. How did you conclude that the chip isn’t fast based on that?
The benchmarks had variations because the G5 is a chip that has some obvious strengths, several glaring weaknesses, and a mediocre compiler driving it. However, it’s really not that hard to pick a winner here. Basically, the benchmarks show that if your data set fits in cache, there is little integer code in your algorithm, and GCC doesn’t mess up your code’s scheduling, the G5 can win. In nearly every other case, it loses.
As I mentioned above, your benchmarks are inconclusive, because GCC is the weak link. You can’t gauge performance numbers when you know the compiler is generating inferior code.
No, it’s not. A Civic might show a better 0-60 than a Mercedes, but would anybody really argue about which one is more “premium”?
Sorry, wrong analogy. You are comparing a tricked-out Civic to a stock Civic when you compare Samsung memory to Corsair, because Samsung supplies the DRAM chips to Corsair.
You could compare, say, a Mercedes SL 500 and an SL 55 AMG for a proper analogy. The Mercedes would still be a premium car.
“Better” was probably an inappropriate term. “Higher quality” would be more appropriate. The Apple mouse and keyboard work just fine (like most keyboards and mice). They are not, however, “high quality” units. The leather in a Civic works just fine too. It’s not “high quality” like the leather in a Mercedes, however.
Higher by what standards? Leather quality can be gauged by certain metrics. How do you gauge the quality of a keyboard and mouse?
I didn’t know Civics had leather on their options list.
The keyboard in the X2 isn’t a Logitech. It’s a Cherry, built in Germany and imported here by a wholesaler. I’m snobby about my keyboards, and any way you slice it, the Cherry is the higher quality component. Now, many people might prefer the softer feel and lighter weight of the Apple keyboard. That’s fine. That doesn’t change the fact that it uses cheap membrane “key switches” and lightweight plastics.
Again, cheap according to you. What do you expect, Apple to ship Cherry keyboards with their systems? Apple sells thousands of computers. You built one machine for yourself and literally cherry-picked its components. Can you point me to one top computer manufacturer who OEMs Cherry keyboards?
You keep comparing apples and oranges (no pun intended). That is my problem with your ridiculous conclusion.
1) That I didn’t hear what I was actually hearing. You even went to the length of accusing me of lying about what I was hearing;
I apologize for calling you a liar. But you weren’t exactly comparing the disk drives in isolation when you claimed one was noisier. All your hearing tells you is that the WD drive in the G5 tower appears to be noisier than the Seagate in the X2. But you categorically blamed the WD for being cheap and noisy when in fact all the data says otherwise.
I am criticizing your lack of objectivity.
2) That there was no way to judge the quality of particular components (namely the fan and RAM), and that obviously unequal components were of equal quality;
No, unequal components can’t be compared. You can’t say the G5 uses higher-latency DRAM than the X2 and conclude that the memory is cheaper. The fact that it has higher latency is because of the DDR2 technology, not Apple’s component selection.
3) That the fact that the PowerMac used lower quality components in several places is irrelevant.
The point was, it doesn’t have inferior components.
Let’s dissect your claims of inferior components:
Memory:
Seeing as you never mentioned the brand name of the memory in the X2, but went ahead and accused the G5 of having no-name generic DIMMs, you were proven wrong.
Hard disk:
The WD2500JS is more expensive and quieter than the Seagate drive in your X2. Somehow it is more audible in the G5 chassis, according to you. That doesn’t make the WD an inferior drive.
Fans:
I really can’t say much about this. But my G5 has a low hum and occasional hard disk access sounds. No louder than my TiVo’s disk. In fact, my JVC RPTV has louder fans than the G5. Mind you, my fans aren’t always at full blast.
So what is inferior about the G5 again?
I don’t have anything against criticism. Japail proves me wrong on a very regular basis on these forums. He does not, however, get paranoid and defensive and accuse me of lying.
I apologized for calling you a liar. But I am not paranoid or defensive. You are being defensive; you have something to defend, after all, and aren’t seeing reason. It is irritating, I must admit, and this discussion has gone on far longer than I intended.
Why are you contradicting your own statement?
Apparently. Okay, fine: the G5 won decisively in 1 integer benchmark. The other tallies are correct, and my point still stands.
BTW, many comments on your benchmark blame GCC for not generating good PPC code. How did you conclude that the chip isn’t fast based on that?
I never concluded that the chip isn’t fast. The performance differences could be the result of the compiler, CPU, memory, or evil computer demons. The simple fact is that you can’t run C source code on a G5. It has to be compiled with something. Almost always, that something is GCC, or occasionally Metrowerks (which is slower than GCC on SciMark and nbench, the only two I tested it with). The performance of the G5 with a good compiler is a philosophical Zen question. It’s like “what is the sound of one hand clapping?” It’s not something that is relevant to real-world usage.
As I mentioned above, your benchmarks are inconclusive, because GCC is the weak link. You can’t gauge performance numbers when you know the compiler is generating inferior code.
I can gauge the performance of the system as a whole. Remember, this review was not a review of the PPC970MP. It was a review of the PowerMac G5, which just happens to use that chip. It also just happens to use GCC, and all its apps are compiled with GCC. Ergo, testing with something other than GCC would be pointless.
Sorry, wrong analogy. You are comparing a tricked-out Civic to a stock Civic when you compare Samsung memory to Corsair, because Samsung supplies the DRAM chips to Corsair.
I don’t think you understand how Corsair makes memory. They don’t “overclock” the chips. They bin chips they buy from Samsung, to find the ones that run at tight tolerances. They then engineer good PCBs for the chip, and use high-quality surrounding components. A better example, perhaps, would be Toyota vs. Lexus. Yes, the same company makes them, and many parts are similar. That doesn’t mean that the Toyota is a premium automobile, or the Lexus is not.
Higher by what standards? Leather quality can be gauged by certain metrics. How do you gauge the quality of a keyboard and mouse?
The physical quality of the unit. The solidity of the plastic, the presence of a rigid key plate, the quality of the key switches.
Again, cheap according to you.
The fact that membrane key switches are cheap isn’t up for debate. It’s simply the nature of the technology. Membrane key switches are commodity parts meant for commodity devices. Real key switches are what you’ll find in things like arcade machines, which need to stand up to a lot of abuse for a long time.
What do you expect, Apple to ship Cherry keyboards with their systems? Apple sells thousands of computers. You built one machine for yourself and literally cherry-picked its components. Can you point me to one top computer manufacturer who OEMs Cherry keyboards?
I don’t really expect Apple to ship expensive, boutique components. I also don’t expect people to keep pretending that Apple does. That is the gist of my argument. Yes, PC manufacturers ship cheap, commodity components. So does Apple. I’m not saying whether it’s right or wrong, or whether they should or not; I’m simply stating a fact, a fact that counters the idea that Apple machines are made of anything other than commodity parts!
You keep comparing apples and oranges (no pun intended). That is my problem with your ridiculous conclusion.
You seem to think that my conclusion is somehow that “Apple uses crappy hardware.” I dare you to find that conclusion anywhere in my article. My conclusion, rather, is “Apple uses the same quality of hardware Dell does”. That’s exactly what I said at the end of my paragraph on hardware quality. Apple proponents like to think that Apple machines use premium parts. I just showed that they don’t.
All your hearing tells you is that the WD drive in the G5 tower appears to be noisier than the Seagate in the X2.
At your suggestion, I’ve taken both hard drives out, and the WD is still louder, though the difference during idle isn’t noticeable. However, that’s beside the point, isn’t it? Like the whole CPU thing. Nobody uses a hard drive outside of its case. The PowerMac’s hard drive noises are still a lot more noticeable than the X2’s.
But you categorically blamed the WD for being cheap and noisy when in fact all the data says otherwise.
I apologize for calling the WD cheap; that was a Froogle-induced mistake. However, it is still noisy. The fact that you have data suggesting otherwise is irrelevant to me, who must live with the WD and Seagate I have, not the WD and Seagate other people have tested. Also, I’d point out that your conclusion is merely conjecture, based on separate reviews of the two products and manufacturers’ specifications, which give no indication of how the measurements were conducted. At no point do you have any side-by-side comparison, or even a comparison against a consistent baseline.
I am criticizing your lack of objectivity.
I prefaced my article with the fact that I wasn’t attempting to be objective! I pointed out that I don’t think it’s possible for an article like this to be objective. Reviews like these only make sense in the context of the state of mind of the reviewer. For example, I’m not a graphics artist. A graphics artist’s review of the G5 would likely be different. I’m a developer and an engineer who needs a reliable machine, one that doesn’t interfere with my thought process with its noise. I pointed this out explicitly in the article.
No, unequal components can’t be compared. You can’t say the G5 uses higher-latency DRAM than the X2 and conclude that the memory is cheaper.
I didn’t say the G5 uses higher-latency DRAM and is therefore cheaper. I said it uses generic Samsung RAM that runs at standard JEDEC latencies, and is therefore cheaper than the lower-than-JEDEC-latency Patriot memory in the X2. The 1GB Samsung DIMM you linked to costs $90. The Patriot DIMMs cost me $270 for 2GB, or about $135 each. That’s a 50% price differential.
Seeing as you never mentioned the brand name of the memory in the X2, but went ahead and accused the G5 of having no-name generic DIMMs.
I have, several times in this thread. And I never said the G5 had “no name” RAM. I said it had generic RAM. Apparently, I’m not the only one who considers OEM memory to be “generic”. The Patriot memory in the X2 isn’t extravagant, but it is both better and more expensive.
So what is inferior about the G5 again?
Even if we leave the hard disk out of this, you still have the fans, the case, the cooling system (in terms of noise), the RAM, the keyboard, and the mouse. Its motherboard is superior (like you, I have no proof of this, but I’ll give Apple the benefit of the doubt). The CPU and graphics card are probably of comparable quality. Note that the PowerMac isn’t inferior to a Dell in any of these regards (save for the noise of the cooling system), but it isn’t exactly superior either. I never said anything to the contrary.
The most important thing here is perception. The quality of the motherboard is academic. As long as it works, it’s fine. The user never interacts with it directly. What is perceptible to the user is the tactile feel of the keyboard and mouse, the audibility of the system, and the quality of the monitor. The monitor was a constant, and the PowerMac loses big in the domains of how it feels and how it sounds.
Not having felt your keyboard or heard your X2, I can’t pass judgement on how quiet it is or how it feels, but I’ll take your word for it.
I do know my G5 is fairly quiet. If your G5 is as quiet as mine and your X2 is quieter, I’d be pleasantly surprised.
Anyway… you stated your opinion, and I have no problem with that. Since I have no way to experience the X2, I am going to say that by your metrics I still can’t say the X2 is a superior machine. It is quieter, I’ll give you that, but from a design and functionality standpoint I still think the G5 is a great machine. No Dell even comes close in ID.
That’s your opinion, but what metrics are you using to come to that judgement? I’d say mine are fairly common — noise, build quality, and performance. What are yours?
No, boy, a student will not teach me what 64-bit computing is. Go back to your school; you missed some classes, I guess.
“The definition of “64 bit code”: Machine code which operates on 64-bit pointers. ”
That’s not what I want to read. This definition, you copied it from somewhere and pasted it here. This definition does not say what an application does in 64-bit mode that a 32-bit one cannot. Read the document; it is written:
“It involves random access manipulation of “huge” data objects (at least 2 GB)
It needs concurrent access to a quantity of data that will not fit in a 32-bit address space (multi-gigabyte data modelling, for example)”.
You understand English, right? So read it again and try to understand what it means.
“The document says nothing about source code changes! It simply says why you’d want to use 64-bit code.”
Really? Well, read the section “Major 64-Bit Changes”. In this section you will find subsections which are:
“- Tool and Compatibility Issues
– Type Size and Alignment for 64-bit Executables
– Code Impact
– Making Code 64-Bit Clean ”
Are you still saying that this document only says why you would want to use 64-bit code? I don’t think so…
“Your assertion is that you cannot have a 64-bit application just by recompiling source.”
Again, read the document. You will find alignment issues and issues with casting pointers to non-pointer types, among other types of issues that arise when porting to 64-bit mode.
“#include <stdio.h>
int main()
{
printf("%d\n", (int)sizeof(int*));
return 0;
}
g++ -o test test.cpp
./test
4 <— This is 32-bit code
g++ -arch ppc64 -o test test.cpp
./test
8 <— This is 64-bit code
”
Wooo, just what I expected: you seem to think that passing a simple flag to a compiler will give you a 64-bit app. I am telling you again, don’t be stubborn; get some information and learn.
Your code ends up being 32-bit code running on a 64-bit CPU, nothing else. You are not doing any 64-bit computation here. How can you think that this piece of code has anything 64-bit about it? You are just manipulating an int, which is a 32-bit declaration for an integer. You don’t even understand basic C/C++ declarations. Great!!!
Even with the -arch ppc64 flag, the compiler will still deal with a 32-bit declaration. You are not writing a 64-bit app here; the printout of this code on any machine, 64-bit or not, is still 4 (the size of an int: 4 bytes, or 32 bits), with the -ppc64 flag or not.
If you want to declare a 64-bit integer, you need to pass a long to sizeof, provided that the system supports the LP64 model for data type sizes.
Mac OS X Tiger uses two data models, ILP32 (in which integers, long integers, and pointers are 32-bit quantities) and LP64 (in which integers are 32-bit quantities and long integers and pointers are 64-bit quantities).
If you pass a long and run this program on a system using only ILP32, then you still get 4 (the size of a long in ILP32 is 4 bytes, or 32 bits) as the printed result. If you run this program with only the -ppc64 flag (in the definition of the target architectures in Xcode, which tells the compiler to produce a 64-bit binary for the ppc970 in 64-bit mode) on a G4, it won’t run (how could it run? the G4 does not have a 64-bit integer unit; it has a 32-bit integer unit). If you run the program with the -ppc64 flag on a G5 (which says to use LP64 instead of ILP32), it will give you 8 (the size of a long in LP64, that is, 8 bytes or 64 bits).
Comprende, wakarimasuka, do you understand, comprend tu, zu begreifen?
Yes, you really don’t know what you are talking about; I tell it to you again…
You, sir, really don’t get it.
Compiling a program in 64-bit mode is what makes it a 64-bit program; claiming anything else just makes you seem foolish. Compiling the code in 64-bit mode will make the CPU execute it in 64-bit mode.
Now, did you notice the little * in his sizeof? He is getting the size of a pointer to an int, not an int. So he proved that in one mode the pointer size is 4 bytes, and in the other mode it is 8 bytes.
The fact is that passing a different architecture flag to the compiler makes it generate different code. That you are able to write code that only works in 64-bit mode is irrelevant, just as it is irrelevant that you can compile code in 64-bit mode that doesn’t need a 64-bit address space. Hell, most of my programs could be compiled in 16-bit mode; does that mean they aren’t 32-bit applications once compiled as such?
I’d like to add that the AMD64 platform has additional GPRs in its ISA, but only when run in 64-bit mode, which is one reason the same program compiled for AMD64 can run faster than when compiled in 32-bit mode.
Anyway, Hakime, instead of trying to debunk real-world results with a half-baked theory, why don’t you just RTFA? It won’t kill you. I promise.
Don’t worry too much about these trolls, Rayiner: the glass castle which the Apple sycophants have built to chuck stones at the PC world began to buckle the moment Apple announced the switch to Intel. It’s only a matter of time now. Besides, have you ever heard a metal frame twist and crumple under a heavy strain? It makes the most horrible whining and squealing noises. 😉
Oh, so just because the CPU vendor changed, an Apple computer stops being an Apple computer? So Apple mustn’t be Apple any more, since they dumped the Motorola 68000 for the PowerPC.
I recently bought a PowerMac G5, and I am waiting for the Intel-based PowerBook to replace my 15″ AL PowerBook G4. Sorry, there are many Apple fans looking forward to the transition. Then the silly PC fanboys won’t have anything to complain about; but wait, they still will. They will complain about the fans.
No, boy, a student will not teach me what 64-bit computing is. Go back to your school; you missed some classes, I guess.
A student is sitting here laughing at your utter lack of comprehension.
That’s not what I want to read. This definition, you copied it from somewhere and pasted it here. This definition does not say what an application does in 64-bit mode that a 32-bit one cannot.
A 64-bit application can manipulate addresses that are larger than the ones a 32-bit program can manipulate. It does not have to. That is the only difference (at a program level — there are more at the CPU level).
You understand English, right? So read it again and try to understand what it means.
You’re accusing me of not understanding English. You’re joking, right?
Again, read the document. You will find alignment issues and issues with casting pointers to non-pointer types, among other types of issues that arise when porting to 64-bit mode.
Of course. If your code isn’t 64-bit clean, then you’ll have to make source code changes. That isn’t what you were talking about. You were talking about “64-bit declarations” and “long double”. What the heck do you mean by those?
Wooo, just what I expected: you seem to think that passing a simple flag to a compiler will give you a 64-bit app. I am telling you again, don’t be stubborn; get some information and learn.
Your code ends up being 32-bit code running on a 64-bit CPU, nothing else.
What the bloody hell is wrong with you? 64-bit code = code that uses 64-bit pointers. It’s a very simple definition that’s very simple for a programmer to understand. Why in god’s name are you trying to contort it so? The 64-bit code in the example above is pure 64-bit code. It runs with the SF bit enabled. If it is asked to process > 4GB of data, it can. It’s all very simple.
You are not doing any 64-bit computation here. How can you think that this piece of code has anything 64-bit about it? You are just manipulating an int, which is a 32-bit declaration for an integer. You don’t even understand basic C/C++ declarations. Great!!!
It’s a pointer to an integer, you twit.
Even with the -arch ppc64 flag, the compiler will still deal with a 32-bit declaration. You are not writing a 64-bit app here; the printout of this code on any machine, 64-bit or not, is still 4 (the size of an int: 4 bytes, or 32 bits), with the -ppc64 flag or not.
Copy and paste the code and run it yourself you smug little bastard.
Just one thing I want to add: almost all UNIX and Linux implementations use LP64.
Again, the same comments come up again and again. Mach is not implemented as a microkernel in OS X. Darwin is not a microkernel; XNU is monolithic. Since Mac OS X was not intended to work as a multi-server, and a crash of the BSD server was equivalent to a system crash from a user perspective, the advantage of protecting Mach from BSD was negligible. Rather than simple collocation, message passing was short-circuited by having BSD directly call Mach functions. The abstractions are maintained within the kernel at the source level, but the kernel is in fact monolithic.
Sorry if this problem was mentioned before in the comments or on later pages of the article. As far as I know, the loud fan problem is because of the Mac OS X 10.4.3 update! Take a look at this page:
http://macintouch.com/tiger34.html
Good Luck
Very nice article, Rayiner. I like your style. Too bad this discussion is being ruined by idiots (Hakime has no clue what he’s talking about).
Seagate and WD both make good hard drives. The Antec P180 is a nice case. The way the hard drives are mounted in the case makes them very quiet. I don’t know how Apple attaches hard drives to the case, but this might be why the Seagate seems quieter than the WD.
Sorry, my English sucks.
Rayiner, just one question about this. Did you add up the prices of what you paid for components to build the X2, then assume Apple would pay the same, but would need a markup, and so added 20% to get an X2 price to compare with the G5 price, and thus arrive at the “value difference” to be made up by software and intangibles?
Because if so, that’s not right, is it? Apple would buy your components for a lot less, two thirds or maybe one half what you paid. The margin would already be there, in the bulk buy discount.
Yes, I added up what I paid. I also didn’t factor in the fact that the X2 is several months older than the G5. Several of the X2’s parts are cheaper now than they were (specifically the processor and hard drive, which together would take $150 or so off the system cost).
So in the real world, the price comparison would actually be much more unfavourable to Apple. Or, put it another way, if Apple were putting together your X2 and selling at industry standard margins, it would probably retail for no more than what you paid for your components, and probably less.
This was what I concluded from comparing stuff from Evesham and Macwarehouse a couple of weeks ago.
I’m not saying this means that the Mac package is a bad buy, but we do need to get clear about the facts. If only to make the whole discussion more good humoured! Joke! Joke!
As for the insulting stuff that has featured on this thread recently, don’t rise to it. It does not help to descend to their level, and the article stands on its own as a good piece of work.
Can I ask one other thing? What exactly is the difference in display rendering between OSX and Xorg and Windows? Is there any reason to think one is significantly superior to the other in terms of font display, icon definition and so on?
Display rendering seems about the same, but I’m on a flat panel (DVI), so the video card drivers are taken out of the equation. The Mac gamma of 1.8 is a bit bright for my monitor (native gamma around 2.0), so I use the sRGB profile instead. Fonts in OS X, as I said, are fuzzier but more “elegant”, and anti-aliased widgets make everything a bit softer. The main thing in OS X, in terms of display, that’s really superior is the totally flicker- and tear-free window movement and resizing.
The spatial Finder is what Mac OS 9 is famous for, and championed by John Siracusa of Ars Technica. You can get this behavior from the OS X Finder by clicking on the widget at the top right. You can adjust other behaviors to model OS 9 under Finder preferences. The Finder does other things from OS 9 that most people may not be aware of. For instance, if you click on the title of a Finder window while holding the ‘command’ or ‘apple’ key, you can see the hierarchy of levels and navigate back up. There are many gems of this sort in the Mac OS, and one is constantly amazed at how well OS X has retained many of the fine attributes of OS 9 while excelling in other aspects, such as stability and aesthetics.
Regarding perceived value and reliability: I have owned 3 Macs over the past 12 years or so. I only had to take the LC475 in once, to have the battery replaced. Subsequently, I got the courage to do it myself. I have never called tech support. I had to change the headphones of my iPod mini, and I did it over the internet without any hassle.
Another way to get peace of mind is to get extended care. I did this for my most recent Powerbook (2004).
One can always build a better PC for less money (most folk are not as knowledgeable as the author and wouldn’t be able to tell RAM from a hole in the ground). Thus, for the majority of folk, Apple makes reliable computers (1 in 3 Dell machines are apparently recalled; much fewer Macs are).
Finally, I buy the Mac for its OS and the applications that run on it (some of the best applications run in Classic, and I am yet to see one as good as those in OS X or in Windows). I don’t mind paying a premium for the peace of mind and the reliability of a workhorse. The Mac has always come through for me when I am under time constraints or giving presentations.
I also have an IBM ThinkPad Express. I never hook this up to the internet to do shopping or banking. I would not even dare, with all the malware, spyware, and trojans running around. While it is a nice-looking machine, it does not beckon me like my PowerBook does. It is hard to explain, but the Mac is less annoying. Every time I log on to the ThinkPad, I get these dialog boxes in my face: time to upgrade Norton; I have connected a USB device, do I want to find a driver for it? Sorry, cannot find driver. It looks like you would like to connect to the internet, can I help? After many clicks, it is unable to make connections. Basically, PCs drive me crazy, and I try to use them as sparingly as possible. One has to use a Mac for a while to really get it (e.g., Anand of AnandTech), but it may not be for everyone.
Finally, all this nonsense about hardware differences will be moot quite soon. We can then see which OS does the most for the end user in terms of not getting in the way of accomplishing tasks, which is what a computer is for.
Nicely written article, although some may not agree with everything the author opined about.
Cheers
The ferocity of the objections makes absolutely no sense to me. Rayiner has just tested two configurations of machine, and has reasonably well shown that the X2 is a little better. It is also quite a bit cheaper. Rayiner has also shown that the components used are not premium, and generally, that there is no reason to think Apple hardware is in any way higher quality than that available from decent mass-market suppliers.
A couple of weeks ago I showed that if you go to a mainstream quality supplier in the UK, what you get in the way of an X2 system is about 300 sterling cheaper than the equivalent G5, and what you get for it is two hard drives, not one; dual higher-end graphics cards, not one; a 19-inch LCD monitor, not included with the G5; speakers, not included with the G5; and 1GB of memory instead of 512MB.
There is no doubt about it. Whether it is worth it or not, you pay more if you buy a Mac. How much more? About half as much again and sometimes double. Now why everyone should get so emotional about this, and why they should all act so surprised, is a complete mystery. This is one of the great obvious truths of our time.
Just all calm down and say to yourselves: it is true, but I still want one. That’s all you have to say. We all want things. They don’t have to be cheaper. How many times have you bought Levis instead of an equally well-made off brand? You just want it. You are not wrong. Stop shouting at the guy who points out that Levis and the off brand use the same denim. He is right, but it’s not your problem.
Of course, if you want to go around feeling superior to the rest of the world because you have bought product X, and now no longer can, that is a problem. But it’s YOUR problem, not his. Deal with it.
The ferocity of the objections makes absolutely no sense to me. Rayiner has just tested two configurations of machine, and has reasonably well shown that the X2 is a little better. It is also quite a bit cheaper. Rayiner has also shown that the components used are not premium, and generally, that there is no reason to think Apple hardware is in any way higher quality than that available from decent mass-market suppliers.
Try configuring a similar system from Alienware using the X2 4400+. The price is $2700. It has a better graphics card.
So Rayiner’s configuration from any vendor is always going to be more expensive than a DIY home system. Almost anything DIY will be cheaper: speakers, performance mods to cars, oil changes for your car, food… you name it.
I just don’t get the logic of this comparison.
Go have a look. This is the Evesham one in the UK – a pretty well respected, value but quality supplier:
http://www.evesham.com/PCs/Info.asp?e=AD71C5B2-CD64-49BD-8DBD-8932A…
Then you can head on over to macwarehouse.co.uk and check out prices on the G5.
Rayiner’s point was: if I have comparably specified systems, does one perform notably better than the other? Is there evidence of better-quality construction in one rather than the other? And, secondarily, how do the prices compare?
What he seems to have found is that, of two similarly specified machines, one a G5 running Tiger and the other an X2 running Ubuntu/GNOME, the X2 performs a bit better. He has looked at the internals and found no particularly remarkable quality in the G5, except for the case, which he agrees (my words) is a wonder of modern metal bashing.
Then, he’s looked at value, added up his bits and found the result cheaper. Now, you have to agree, adding up the bits is not particularly fair – I think it actually favours Apple in the comparison – which is why I checked out Evesham. My conclusion was, at least in the UK, you are paying more for Apple, and getting less. A lot less. Less memory, less drive, less graphics, and less flat screen. Oh, and less speakers.
Not that this is the end of the world, not that this makes Apple an awful company, or people who buy Apple stupid. No, but it is a fact.
Rayiner’s point was: if I have comparably specified systems, does one perform notably better than the other? Is there evidence of better-quality construction in one rather than the other? And, secondarily, how do the prices compare?
Unfortunately, Rayiner never procured the X2 from a vendor; he built it himself. So the point is moot, and so is the comparison.
Hand-picking components will always guarantee that you receive the components you like and that they meet your expectations. Yes? It doesn’t matter whether he picked Evesham or Alienware, or at what price. Rayiner would have found something to hate about them, possibly the same things he hated about the G5, because he didn’t pick the components.
I am sure neither of the manufacturers sells the exact same case, fans, hard drive, keyboard, or mouse as the ones Rayiner picked. So the discussion is baseless.
What he seems to have found is that, of two similarly specified machines, one a G5 running Tiger and the other an X2 running Ubuntu/GNOME, the X2 performs a bit better. He has looked at the internals and found no particularly remarkable quality in the G5, except for the case, which he agrees (my words) is a wonder of modern metal bashing.
My point is that he would have found the same unremarkable qualities buying an X2 4400+ machine from any vendor that makes more than one machine (not custom assemblers, of course) and sells them for a profit.
I am not sure how to explain such an obvious point any better.
If you build something by picking parts you like, you will almost always like the whole package. If you buy something ready-made, chances are you will find something to dislike about it.
A custom-tailored suit, made to your specifications, will always fit and feel better than a ready-made one from, say, Armani, Lanvin, or Brioni. And chances are it will cost you less. A turbocharged 330i pushing 400+ hp, with aftermarket shocks tuned to your liking and tires of your choosing, will always feel and drive better, according to you, than a stock M3 with factory everything.
Not that this is the end of the world, not that this makes Apple an awful company, or people who buy Apple stupid. No, but it is a fact.
No it is not the end of the world. But the comparison isn’t fair for the reasons I have outlined above.
If this review had compared two vendors’ products, you wouldn’t have heard a peep from me. But comparing a DIY PC with hand-picked components specifically known to be silent, and then complaining that a ready-made product isn’t as quiet, seems unnecessary.
Edited 2005-11-17 20:14
My point is that he would have found the same unremarkable qualities buying an X2 4400+ machine from any vendor that makes more than one machine (not custom assemblers, of course) and sells them for a profit.
I am not sure how to explain such an obvious point any better.
If you build something by picking parts you like, you will almost always like the whole package. If you buy something ready-made, chances are you will find something to dislike about it.
This is maybe not doing the subject justice.
First, noise is a known problem on the G5. It is not that he picked a superquiet set of components, and found them quieter. It is that the G5 is noisy, period, and he gives very good reasons in terms of components why this is. You can tell that noise is a problem from the Mac newsgroups. I think this is remarkable, considering that in the past Apple has taken serious risks with overheating to avoid noise – as with the Cube, and noise is very important to the sensitive and creative people who are Apple’s target market. So, it doesn’t meet his point to argue that a home built system is quieter. To meet his point, you will have to show that the G5 is not noisy. Seems a hopeless case.
In terms of components, yes, you’re right. He used pretty standard, branded, components. He would have discovered roughly the same components in most PC manufacturers’ hardware. With some arguable shortcuts, these he also found in the G5. But that was the point. We are always hearing about the superior quality of the Apple hardware. What he has shown (and what lots of other people have shown – in fact, what anyone who has opened the case knows) is that the hardware is not superior quality. It does indeed consist of the same stuff that goes into all PCs. But this is what he is pointing out.
Finally, he showed that an X2 roughly comparably configured is of slightly superior performance. Whether his system was home assembled or not is immaterial. Any roughly similarly configured X2, and his was not very special, would show the same, regardless of who put it together. This is interesting because we have always been told that some mystical qualities of the PPC processor and Apple main board somehow made it perform above its weight. He shows that this is just false.
He then goes into the question of value, and I agree that here he is on weaker ground. His approach has probably led him to understate the Apple value gap. I think that buying a bunch of components at retail is likely to lead to higher costs than buying the same set of components packaged by a supplier. They buy in bulk, you buy them one at a time, and labour is a very small part of the cost. This is the reverse of what you argue, and why your argument seems dubious on this point. This is why I looked at Evesham for a comparison, and Evesham does seem to prove that for less money you get an enormous amount more computer. Very roughly, at the high end, buying a given spec from Apple seems to about double the price as against buying it from a known good quality supplier. At least here in Europe.
The Mini is a special case: that form factor at that performance level is simply not obtainable elsewhere. So you get into a different question with the Mini: do you want to take the performance hit for the form factor? Different issue. I posted some material on what the performance hit in terms of spec was likely to be, and it is quite large, but the form factor makes it apples and oranges.
This is all about THE FACTS.
Now we come to explanations and justifications, and everyone defending Apple’s higher prices may be right. This is not to me the interesting part. Yes, maybe they do spend more on R&D. Yes, maybe they do have lower market share than Dell. Yes, maybe the OS is better, or at least, maybe some people like it better. Yes, the cases are absolutely beautiful. I like the orange stitching on my Levis, too. This is not the point. The point is, to have Apple people, in their advocacy, stop misrepresenting THE FACTS about the platform.
Some of us are old enough to remember the debates about the former Soviet Union. The Ukraine famine, the purges. There were two distinct issues. First, what happened, the facts? Second, were there excuses and explanations? You noticed though, as with Apple, that very few of those impressed with the excuses were very eager to recognise “the facts”. And with similar reasons – the facts really were not very palatable and once admitted, the justifications seemed, well, a bit thin somehow.
First, noise is a known problem on the G5. It is not that he picked a superquiet set of components, and found them quieter. It is that the G5 is noisy, period, and he gives very good reasons in terms of components why this is. You can tell that noise is a problem from the Mac newsgroups.
I suggest you reread the Mac forums. Every report on the new dual-core G5s is that it is a quiet machine. The earlier G5s had issues with noise because of faulty components, which Apple has taken steps to fix or replace in service. You can’t take past issues and somehow expect them to hold true in the present.
To meet his point, you will have to show that the G5 is not noisy. Seems a hopeless case.
I have shown it to be so. I posted many comments from the MacNN forums, the same thread he used to prove his point. I posted four reports of a quiet machine for his one. Even the one person Rayiner found claimed the G5 was a quiet machine after he had had it for a week.
BTW, Rayiner didn’t show the G5 was noisier by any measurements. I own a DC G5 and I know how much noise it makes. Someone would have to be insanely anal about PC noise to find it noisy. I have heard many PCs and Sun workstations; by far the G5 has been the quietest. Probably the only quieter machine is my PowerBook. The G5 is quieter than most Dell laptops, and than my dad’s ThinkPad R52 when their fans are running. So far nothing I have run on the G5 has made the fans really spin full blast: H.264 HD videos, DVD-to-DivX compression, nothing has made a difference to the acoustics of the machine in terms of fan noise. You will find similar experiences on the Mac forums.
In terms of components, yes, you’re right. He used pretty standard, branded, components. He would have discovered roughly the same components in most PC manufacturers’ hardware. With some arguable shortcuts, these he also found in the G5. But that was the point. We are always hearing about the superior quality of the Apple hardware. What he has shown (and what lots of other people have shown – in fact, what anyone who has opened the case knows) is that the hardware is not superior quality. It does indeed consist of the same stuff that goes into all PCs. But this is what he is pointing out.
Let me make one thing clear: there are no premium brands in computing components. Corsair’s own CEO admits their market is gaming and enthusiasts. If people confuse market segments with premium quality, that’s not my problem. A premium DIMM had better have diamond-studded, platinum-plated heat spreaders for Sean “Diddy” Combs. Let’s not confuse DIMMs selected to be overclocked with premium parts. I could always buy a higher-clock-rated CPU and run it at a lower clock speed, so I can get a better night’s sleep with a false sense of security and brag about how my CPU is a premium product because it could run so much faster, being from a higher speed bin.
Those are the ridiculous points that were made about so-called “premium” components.
Apple’s industrial design is premium; no other manufacturer uses aluminum like Apple does, or pays attention to the way their machines look inside and outside. It is kind of like looking under the hood of a Corolla and a VW Jetta: the Corolla just has the engine, while the Jetta has a nice, pretty engine cover and everything looks tidy.
Finally, he showed that an X2 roughly comparably configured is of slightly superior performance. Whether his system was home assembled or not is immaterial.
It is important. I showed that a comparable Alienware X2 PC with 512 MB of RAM is $2300. It is considered a noisy machine even with their water cooling; just read the SilentPCReview forums. All of Rayiner’s points about the G5 having inferior components would equally apply to the Alienware box. It too is considered a premium machine from a premium manufacturer.
I think that buying a bunch of components at retail is likely to lead to higher costs than buying the same set of components packaged by a supplier. They buy in bulk, you buy them one at a time, and labour is a very small part of the cost. This is the reverse of what you argue, and why your argument seems dubious on this point.
Rayiner lives in the US and so do I. European comparisons hold no meaning for this discussion because the variables aren’t the same. I have no clue what kind of duty Apple pays to sell its goods in the UK, and Evesham, being a UK-based company, probably doesn’t pay those same duties. How about you compare another US manufacturer’s box in the UK to the Evesham one? Chances are the results would be the same.
I proved my point, with the Alienware example, that buying a comparable box from a reputable company will be more expensive than building it at home. Rayiner confirmed it was $500–$600 more.
Now we come to explanations and justifications, and everyone defending Apple’s higher prices may be right. This is not to me the interesting part. Yes, maybe they do spend more on R&D. Yes, maybe they do have lower market share than Dell. Yes, maybe the OS is better, or at least, maybe some people like it better. Yes, the cases are absolutely beautiful. I like the orange stitching on my Levis, too. This is not the point. The point is, to have Apple people, in their advocacy, stop misrepresenting THE FACTS about the platform.
There is no question they have a smaller market share than Dell. No other company has more market share than Dell in the PC business.
Those are too many maybes. Apples are expensive, but you can’t compare them to PCs with Intel processors until Apple starts selling them with Intel CPUs. Even then you can’t compare them with the X2, because AMD has lower prices and higher performance than Intel. What are you willing to bet that if the G5 were replaced by a Dell XPS dual-core machine, the conclusion would still hold true?
Wha? How’d you get $2700? I get $2200 when I try it, and it has a 6800GT to boot, which is a good $150 more than the 6600GT in my machine. When you subtract that — you get just a bit over $2000, precisely what I estimated!
Wha? How’d you get $2700? I get $2200 when I try it, and it has a 6800GT to boot, which is a good $150 more than the 6600GT in my machine. When you subtract that — you get just a bit over $2000, precisely what I estimated!
My mistake. I was fooling around with their configurator and added some extras. The last price stuck in my head.
But the price I quoted included a silent cooling system, which would bring it closer to your X2.
The price is $2300+ with that.
Now here is a question. You had issues with the PowerMac G5:
1. It wasn’t as silent as the X2 you built.
Would the Alienware box that cost $500–$600 more than yours have the $10-a-piece fans and an equally damped case?
2. The hard drive was noisy.
Would the Alienware box come with a quieter drive?
3. The keyboard and mouse were not up to your standards.
Would the Alienware 7500 come with a better keyboard and mouse?
I think it’s funny that all the PC fan boys are _stuck_ on this idea of Apple using “superior” components and are doing their absolute best to paint Apple as a company that uses “generic” parts…
I thought that Apple was closed and proprietary?! Hasn’t that been your war drum of the last 20 odd years? Certainly, other than the motherboard and processor, they use “standard” PC internals. Not, “generic” but “standard.” There is a fricking difference people! The best parts for any job are the parts that work as specified and don’t break. Get a grip, you sound like a broken record.
And all this ranting about “mac fan boys” say this and that… It sounds more to me like you’ve got beef with them and are doing your best to denigrate Apple just to get their goat. Childish, really.
The market share for Macs in the US is about 4.5%, and that of Dell is around 16%. The majority of PCs sold are of the no-name/white-box variety. Apple spends a greater percentage of its profit developing new designs and investing in new technologies than other PC makers. For example, they were the first to bring to the consumer market what is now known as Wi-Fi.
Apple makes a healthy profit on its computers, around 27%, which is more than Dell does (but Dell only slaps together parts and puts a label on them). Apple is a profitable company. But their strategy must change with their move to Intel, for reasons clear from the acrimonious debate in this forum. They may have to charge less, but maybe not. Just as Sony charges more for its laptops than Averatec, and the Panasonic Toughbook is more expensive than cheaper Wintel laptops, Apple may be able to continue to price its products somewhat higher than the cheapest PC. Not everyone who buys a PC buys the cheapest going only by specifications. People do pay more for less weight (in laptops), aesthetics, and reliability.
We will see shortly how Apple deals with this tricky situation when they start selling Mac/Intels in a couple of months. iBooks are rumored to be the first Intel-inside products to be released.
It’s going to be fun, and the real debates will occur then.
What we need is a comparison of X2 detailed by the author with one of the boxes from Dell or Sony priced online. How cheap are those in comparison?
Cheers
I work for a computer manufacturer, and the way we source memory is a lot like what the so-called premium memory makers do.
We usually have two or three vendors’ DIMMs in all the sizes we want to offer. Then we run shmoo plots to see how each of them behaves at different speeds and how much headroom we have. We also do rigorous memory tests, error-injection tests for ECC memory, and environmental tests in an oven and a cooler. We usually source DIMMs that exceed our specs so that we have enough headroom for real-world environments.
I am sure Apple does something similar. If you notice, the part number I posted is different from the stock ones on the Samsung page I posted, which tells me that those DIMMs were manufactured to Apple’s specs.
The ones Dell gets made for it meet Dell’s specs. Now, how much headroom Dell’s or Apple’s specs leave for their respective DIMMs is hard to tell without knowing their policies.
You will most certainly find huge variation even in OEM memory, depending on the manufacturer’s standards. Your guess is as good as mine whether a Samsung or Micron DIMM you find in a Dell is the same as the one in an Apple, but chances are more likely than not that they won’t be exactly the same.
Sometimes real world experience helps.
Noise is not a 6th-power function of velocity; it goes like the 8th power of velocity, which is very well known in the field of aeroacoustics. Also, I found the article to be kind of a “pretend” review with some “toy” benchmarks. How about evaluating some heavy-hitting engineering benchmarks that are used in the real world? You could at least gain some credibility, and add some expertise to the review, by testing a computational fluid dynamics code, for instance. Testing minor benchmarks doesn’t really give an indication of real-world performance. To me, this is just another case where anybody can come along and write a review that gets posted on the internet. I just don’t feel a lot of credibility or expertise here. The review is wordy and painstakingly composed, but not in a good way. I’d rather see relevant facts.
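[Editor’s note: as an illustrative sketch, not part of the original thread, the practical gap between the two claimed exponents is easy to quantify. Assuming only that acoustic power scales as velocity to some power n, scaling velocity by a factor r changes the sound power level by 10·n·log10(r) dB; the function name below is hypothetical.]

```python
import math

def noise_reduction_db(velocity_ratio: float, exponent: int) -> float:
    """Change in sound power level (dB) when air velocity is scaled by
    velocity_ratio, assuming acoustic power ~ velocity**exponent."""
    return 10 * exponent * math.log10(velocity_ratio)

# Halving fan speed (velocity ratio 0.5) under the two claimed laws:
dipole = noise_reduction_db(0.5, 6)  # ~ -18.1 dB under the 6th-power claim
jet = noise_reduction_db(0.5, 8)     # ~ -24.1 dB under the 8th-power claim
print(f"6th power: {dipole:.1f} dB, 8th power: {jet:.1f} dB")
```

Either way, the direction of the review’s argument holds: halving air velocity yields a very large noise reduction; the disputed exponent only changes how large.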
I think heat dissipation has been a big problem for the G5; thus the big heat sinks and the noise. This problem may well disappear after the dreaded switch to Intel. I am concerned that performance under Intel may suck, since that is such an obsolete chip architecture, but since compilers for Intel are more mature, it may be a wash. Using realistic benchmarks, I expect OS X on Intel to be faster than XP, though there will be less of a gap initially.