I was reading an editorial at BusinessWeek this morning about Apple being “too cool” but not delivering new PowerMacs. I have heard that the G4 CPUs are already close to the limit of the speeds they can deliver. In the past I wrote an editorial about Apple creating “Macs based on x86/Opteron,” but with Apple staying faithful to the Motorola CPUs, could the development of multi-CPU Macs be a (temporary) answer to the G4 speed limit, especially when the G5 is nowhere to be seen? So, the idea would be to create machines holding 4 and 8 CPUs as an addition to the existing 2-way PowerMacs, at prices between $3,000 and $6,000 USD. These models would have to run 1.33 and 1.5 GHz G4s (even if Apple might have to hand-select the CPUs that can handle those speeds).
Such machines would let Macs “virtually” offer a combined 5.33 to 12 GHz, and at prices up to $6,000 they could easily beat Sun and SGI workstations (and even some Dell ones), at least in price/performance ratio…
The only question, of course, would be tweaking Mac OS X to make it even more multithreaded than it is today (which might also help responsiveness). You see, in order to take advantage of SMP, the OS and the apps need to be written in a way that complements multithreading.
The real problem, though, might be third-party application developers, who might need to redesign or simply tweak their high-end apps, or in the worst case learn how to write multithreaded applications (something that was mostly a necessity on BeOS and not on other OSes so far). Except for a few developers in the BeOS community, most never completely mastered multithreaded programming. If Apple could overcome this problem with its own devs and provide education on the matter, it would be great to see such workstations from Apple, especially after Apple’s acquisition of several companies that do high-end 3D, rendering and music software.
What do you think? Could that be a temporary solution in the absence of the G5, and maybe even bring Apple into a new market and some additional cash in today’s struggling economy?
I’m afraid that simply putting more processors in a machine won’t necessarily make it any better for the “desktop” experience that is the primary use of Macs. Granted, it will help, but the performance benefit becomes practically nil after 4 processors (based on my experience with x86 processors, at least), especially since most apps are not multithreading-aware.
So the question would be, is it easier to port everything to be multithreading-aware just to change everything again when the G5 comes out, or is it easier to port to a drastically faster architecture with a plethora of existing experience (e.g. x86, probably x86-64)?
> I’m afraid that simply putting more processors in a machine won’t necessarily make it any better for the “desktop” experience that is the primary use of Macs.
I talked about the workstation market, which is a market the Mac IS after, not the desktop. The idea is to introduce Apple to a new market and bring some “easy” cash in.
> especially since most of the apps are non-multithreading aware.
Which is what I wrote. Both the OS and the high end apps (not all are needed to be) will need to be truly multithreaded.
> is it easier to port everything to be multithreading aware just to change everything again when the G5 comes out
I don’t understand. Why would you have to change ANYTHING when the G5 comes out? You don’t have to change a thing! Multithreading is good no matter whether your CPUs run at 10 GHz or are limited to 1.2 GHz. Multithreading is good for all apps and OSes; it does not need to be changed!
I don’t know if Apple would make the jump to x86 (though who knows on that one), let alone incorporate multiple CPUs in their Macs. What may happen is that Apple switches to IBM’s version of the PowerPC. Maybe the PPC 970, based on the POWER4 core? If I’m not mistaken, Apple did use IBM’s version of the G3 for the latest iBooks.
Remember that this all goes back to the Apple-IBM-Motorola deal made WAY back in the early 1990s. So if for whatever reason Moto can’t keep up with the demand for faster, more efficient CPUs, then IBM may take up the slack. Just a matter of waiting and seeing. You know… once the politics and red tape with Moto are settled. :)
This is what happens when you don’t get enough sleep at night and don’t read through the full story. I understood a mix-and-match of CPUs, not 2+ of the same CPU type. Sorry for the brain fart. :)
“…most of the apps are non-multithreading aware.”
This is a common misunderstanding about multiple CPUs. You do NOT need to multithread your app to take advantage of multiple CPUs. The OS will schedule tasks to whatever processor is available. However, if you multithread your app, it makes the task scheduler a little “smarter” about its scheduling.
By going for the 4-8 processor workstation market, Apple would be getting into a market segment where they have absolutely NO experience, and their main selling point, ease of use, would no longer apply. They would also be up against the likes of Sun and IBM, who are well-established players.
True, the film-related purchases they have been making, such as Shake, mean this could be a direction they are looking at, and it’s much more likely than their going x86 (Apple does not want to become another Be). But it looks more likely that they will try to diversify enough, with products like the iPod, so they still have sources of income until they can get the IBM desktop PowerPC.
I disagree. Having the OS “understand” these issues is of course a big plus, but the application HAS to be multithreaded to take real advantage of SMP. Take Quake III for example: it has two versions, one of which runs well on SMP (about 15-20 frames faster), while the normal version runs only 1-2 frames faster on an SMP machine! Having apps that take real advantage of SMP (instead of waiting for the OS to do the job for them) can be very rewarding.
As we’ve seen with AltiVec, offering better technology, be it more processors or special APIs, and expecting software developers to do the rest is not a good solution. Some AltiVec-optimized code can run circles around code running on 3 GHz P4s, but almost nobody takes the time to write real AltiVec-optimized code. And, arguably, writing good MT code is harder than AltiVec code.
> Which is what I wrote. Both the OS and the high end apps (not all are needed to be) will need to be truly multithreaded.
I don’t know about Quartz, Carbon, or the BSD subsystem, but Mach itself is quite multithreaded. One of the (few) benefits of Mach is that it works well on multiprocessors.
But I have a feeling (and I’d like to see some benchmarks comparing Open/GNU Darwin with Linux) that OS X would fall flat on its face performance-wise on x86, what with all the context switching. Opteron will fix that. Itanic fixes that. But all in all, x86 is dated in its 32-bit form.
>And, arguably, writing good MT code is harder than Altivec code.
Yes, this is true. Be had real trouble getting most of its third party devs to even understand what MT is…
so there won’t be more than 2 CPUs in any case.
You would have to create a chipset capable of running 4 or 8 CPUs first.
That’s before tweaking OS X to run on 4/8 CPUs, and before tweaking apps to run on 4/8 CPUs.
How about this idea: multithreading makes things even better when the G5 comes, because the OS is then ready to make use of multiple G5 processors, delivering better performance than it would if it were still stuck at a limit of 2 CPUs.
Even if old apps do not benefit, new ones will IF they take advantage. An example of this is when the PPC was used as an add-on in the Amiga market: applications came along that took advantage, or had plug-ins that did. Alas, there are many things wrong with that example, but the point is there were benefits, and Apple is in a much better position to make it work than Amiga was, because Amiga was passing through multiple owners and didn’t have a consistent vision at the time.
I think multi-CPU Macs would be very nice, if only for the wow factor, and the names could be fun: the “Quad 4 G4” (that would be 16, though), “G4-4,” “4G4,” or “OctoMacOpus.” Going multiprocessor could be better than faster CPUs if heat is your issue and you can get very good multithreaded apps, and an OS to handle it all. Maybe with FreeBSD 5.0 SMP this could work well. The flip side is that in many ways it’s a bad patch for Apple’s problems. They need more power under the hood, and throwing an army of incompetent soldier CPUs at the problem when they need one super-fighter CPU is not the best solution.
On a different note, I think it would be nice if they introduced a PowerBook using 2 low-power G4 or G3 chips. This could give them more bang, longer battery life (maybe), much cooler temps, and one heck of a wow factor. The advertising department would go nuts.
Now if only OS X would run well on my university’s G3 Macs so they would put it on them, I would be typing this from one of them 10 feet (3 meters) away instead of from this Dell.
The problem with current G4s is the memory interface. Splitting a 133 MHz bus among more than 2 CPUs will not add much performance. This will change with the IBM “POWER4 lite” chip. When it is introduced, I would not be surprised if Apple created some 4-way render nodes in something like an Xserve box. The Mac faithful will have to wait for September…
… too much trouble with the heat (the current dual G4s are noisy enough) and the redesign of the whole software stack to take advantage of those systems.
My guess:
* 2nd quarter 2003(or earlier):
speedbumps to 2×1.5GHz or some more (1.6GHz?)
and maybe with DDR CPU bus
* 2nd half 2003 (maybe last quarter)
new 64Bit Powermacs based on IBM’s new desktop PPC
I am sure Apple can live with that roadmap.
I don’t see the extreme need to hurry with speedbumps.
The current dual G4s run OS X beautifully. I own a dual 1 GHz with a GeForce4 Ti 4400 128 MB, and Quartz Extreme runs snappily on that configuration.
Ralf.
Eugenia: Thanks for not being so negative about Apple this time :)
The real problem is that, at least to me, all those dual G4s look like “we don’t have the G5, we don’t have the PPC 970, so let’s throw dual-something out there and hope Intel and AMD stop this crazy speed race.”
What Apple’s desktop machines need is the PPC 970. All the rest are bad substitutes.
Expanding beyond the 2xG4 design currently being used would require a fair bit of engineering resources. Also, I would be curious how well the G4 even scales upward with respect to CPU count. It should also be noted that you do not receive a linear gain as CPUs are added, so you receive diminishing returns, not an absolute fall off. What gain do you receive for that 3rd CPU? For the 4th?
While it is also true that you do not need to multithread applications to take advantage of multiple CPUs, there is an issue of granularity. If you do not have multithreaded applications, processor sharing is handled at per-process granularity. This really helps when there are a lot of active processes, but that is not the case on the typical system. We have a lot of processes which are sleeping while waiting for events, etc. Even in the workstation market, looking at everyone’s favorite, Photoshop, we are mostly concerned with the performance of a single application. If the application itself is not threaded, the extra CPUs provide little gain.
This leads to a conclusion that multiple CPUs are most effective when using well threaded applications which are very demanding from a CPU perspective or when a system is running a number of services/applications with high CPU demands.
Then there is the issue that when an application is highly threaded, there is additional overhead involved, which will actually result in higher resource demands and decreased performance on single-processor systems. If a program has two threads and there are 8 processors, only those 2 threads can be executed at once, provided that one is not waiting on the other due to interdependencies. Conversely, if there are 8 threads and only 1 processor, only one of those threads can be operated on at a time, which means there are 7 other threads waiting to be scheduled. This results in a lot more context switching and slows down performance.
Alright, that is enough rambling…..
We all know that Apple has gotten itself into a bit of a bind on the CPU issue. We also know that they are taking steps to resolve it with IBM and the PPC 970 processor. If this situation is not resolved by the end of CY ’03, there will be very drastic repercussions, possibly resulting in Apple going x86 as a last-ditch effort to save the company. I truly believe that Apple will not move to x86 for anything less than that. It does not make good business sense to transition to x86, period. This is more a business decision than a technical one.
Until Q3 CY ’03, we will have to live with what we have. I’m sure Apple will make improvements in its HW & SW offerings, giving us plenty of time to save up for that sparkling new system later this year or early next year. It won’t be the small step forward the x86 folks are used to, little increments in MHz at each iteration; it will be a huge leap forward. That is the joy of delaying between upgrades: a much more tangible difference.
I’ll also take this opportunity to state that I really don’t understand why so many people insist that OS X is slow. I see it more as being smooth and controlled as it operates. Other OSes are more about snapping desktop objects around. It’s just a different feel.
Finally, it has been argued that Apple needs to decide what it is: HW, SW, retailer, etc. I think they know very much what they are. They are a digital lifestyle company. They provide the foundation for it in the desktops, let us maintain it on the road with the laptops, and attach all of our other digital devices and integrate it all together. This means providing the means to use these digital devices in our daily living, like the snowboard jacket for the iPod. Did it take a lot of resources to do that? No, nor was it costly; it was just something extra, the details if you will.
I’ve been a Mac user for 2 months now. Being a Mac user to me is about the whole package. It’s not about just the processor, or just OS X, or just iApps, it’s about all of it rolled together. It’s like with the wood grain in a BMW, they layer it and put special laminates and aluminum plating in there so if there is a wreck, it doesn’t splinter everywhere and injure a passenger. Other companies don’t have this same attention to detail, even if they are faster, etc. It’s all about the details and the experience as a whole.
– Kelson
> Eugenia: Thanks for not being so negative about Apple this time :)
http://news.com.com/2100-1040-980752.html?tag=fd_top
It won’t work for the simple reason that historically multi-CPU machines have not, in any large numbers, made it to the workstation market. Sun has MP capability (as demonstrated by its servers) but doesn’t bring this to the desktop. Their “premiere” workstation is a 1 GHz SPARC.
Another issue is that most folks “single task” on their machines. They do one major thing, do it hard and heavy, and move on to the next. MP machines don’t necessarily excel in this kind of environment.
The reason behind that is that MP machines have more of what I consider “torque”. What they may give up in raw speed they make up in overall load capacity. This is where they excel in the server market as a shared resource.
So, I don’t think that wide spread MP machines would really help Apple penetrate the general purpose workstation market for those judging their machine solely by blind performance.
>> Eugenia: Thanks for not being so negative about Apple this time :)
> http://news.com.com/2100-1040-980752.html?tag=fd_top
Hey Eugenia – don’t get off topic here! *lol*
> It won’t work for the simple reason that historically multi-CPU machines have not, in any large numbers, made it to the workstation market. Sun has MP capability (as demonstrated by its servers) but doesn’t bring this to the desktop. Their “premiere” workstation is a 1 GHz SPARC.
Their “premiere” workstation is the Blade 2000, which in its top configuration sports dual 1.05GHz UltraSPARC III processors.
And slightly off topic, and for those of you who care, I’ve continued looking into what caused my OS X crashes.
OS X contains a set of tools to ensure that all applications run prebound. For example, whenever you install anything the installer will sit around for several minutes while it’s “Optimizing system performance”. During this time it’s prebinding the executables it just installed.
One of these tools is “fix_prebinding,” a daemon which waits for messages from the mach_kernel about applications which aren’t prebound. If an application that isn’t prebound is executed, its execution is suspended while fix_prebinding prebinds it.
Well, this would all be well and good, except fix_prebinding seems to be rather buggy. According to CrashReporter I’ve experienced 92 fix_prebinding crashes (that’s since I installed at the beginning of this month, so a little less than two weeks).
The aftermath of fix_prebinding crashes seems to be anything from an application running without prebinding to an entire system crash resulting in the “beach ball of death”.
So, I’m going to look into whether fix_prebinding can be disabled. I’d rather have stability than have my applications start half a second faster.
No existing OS or hardware combination will automatically spread a task across multiple CPUs (with the single exception of code using OpenMP directives compiled with an OpenMP compiler). Code will run on separate CPUs only if it is in separate threads. Almost all OSes will run separate processes on separate CPUs, of course, but that is not nearly as useful (except maybe for responsiveness under heavy load). If you’re running a compute-bound program, like a 3D or video render, multiple CPUs will give zero speedup unless the algorithm is multithreaded.
Exactly. Thanks Rayiner.
Back in the mid-to-late 90s, didn’t a company make a Mac that had up to four 604 CPUs? That one would be a good marker for how these 4-CPU machines would work. I recall the price back then was around $10K to $15K.
My thought is that they will be going to the IBM chip. But my understanding is that those chips won’t be produced until late summer or early fall. So Apple has to tap dance until then.
Rayiner, OpenVMS and Tru64 Unix have been scheduling tasks and processes across multiple CPUs for many years now (all the way back to the VAXes). You can also use the MP libraries and the thread code to multithread a single process; even VAXELN did that (though that does not exist anymore).
The big problem with this whole idea is that the Apple G4 architecture just plain isn’t designed to scale. The CPU bus runs at a 1999-esque 1.3 GB/sec. In comparison, when the K8 and the next iteration of the P4 come out (in Q2), x86-land will top 6.4 GB/sec. The current G4 bus can’t even stream enough data to a single G4 (the AltiVec unit on a 1.25 GHz G4 needs 20 GB/sec to fully saturate it) and would just roll over and die if it were shared by 4 or 8 processors! And since Motorola isn’t exactly aiming the G4 at high-end markets, it’s not as if they’ll be designing a HyperTransport-level point-to-point G4 bus anytime soon!
I find it hard to believe they spread single threads across multiple CPUs. How would protection work in that case?
A QG4 with *correctly written* apps would plow through Photoshop, FCP, and especially Shake like a fire axe through Spam. (It would also run AG Blast at mad speeds.)
But, by the time a new chipset and bus were designed, the new 64 bit IBM chips will be here.
> You can also use the MP libraries and the thread code to multithread a single process
Uhh, ok, care to quantify what the “MP libraries” and “thread code” are?
If you want to write a multithreaded application on OS X, I’d say the easiest approach is… making pthread_create() calls within your application, and linking against libpthread…
…but that’s just me, you seem to have your own ideas…
“But, by the time a new chipset and bus were designed, the new 64 bit IBM chips will be here.”
That’s probably the best point I’ve seen made in the comments so far
No, a thread runs on a single CPU. But are you saying a task = a thread? Because a process can have multiple tasks pending (I/O, ASTs, timer events, etc.) which, if running unblocked, will run on any schedulable CPU while the parent process runs the current thread. Maybe I don’t understand the use of the term “task” in this context.
> a process can have multiple tasks pending (I/O, ASTs, timer events, etc.) which, if running unblocked, will run on any schedulable CPU while the parent process runs the current thread.
It sounds as if you’re describing asynchronous system calls. As far as I know OS X does not have any of these. In fact, the only operating systems I know of with support for asynchronous I/O operations are Solaris, Iris, and Windows…
(implementations are forthcoming in Linux and FreeBSD)
> In fact, the only operating systems I know of with support for asynchronous I/O operations are Solaris, Iris, and Windows…
s/Iris/Irix/
how the Mac would run with every app truly optimized for AltiVec and an even more mature OS X.
I was speaking of OpenVMS and Tru64, not OS X. And yes, those would be asynchronous system calls, which we define as tasks when running unblocked by the parent process. I think we are using the term “task” differently.
OpenVMS has supported asynchronous I/O since V1.0 back in 1976, Tru64 picked up full support in V4.0.
Apple’s made it pretty clear that if you have something that can really tax MP software, they’d like you to invest in a few XServes, use the Gigabit ethernet, and distribute the load. They’re also busy buying up makers of high end rendering software to make sure that the software can play nice with this vision.
Right now there are two things preventing >2 processors in Apple hardware: the kernel hasn’t been made to work with more than 2, and their motherboard designs all have a single memory bus for all the processors. I have no idea how easy the first would be to fix, but my guess is that it’s not trivial, and I haven’t seen any indication that Darwinites are hard at work on this.
The second is a serious problem: every indication is that the current processors are memory-starved, and adding more processors to the same memory bus will only exacerbate the situation. If they want to avoid this, they’re going to have to make a brand-new motherboard and memory controller, also not trivial.
My guess is that, even though both will need to be done in the long term, in these tough economic times they probably just don’t have the engineering resources to throw at these issues. I expect that the hardware engineers are busy trying to get a memory bus in place for the IBM 970, while the software folks are busy profiling and AltiVec-izing as much speed into the OS as possible.
And yes, quad-604 Macs once existed, and MkLinux could run on them. Unfortunately, the hardware that supported them bears no relationship to the hardware behind the current DPs, so there’s nothing there to help in making quad-G4s.
Cheers,
Dr. Jay
The good thing about Mac OS X is that the APIs (at least Cocoa) are fully multi-threaded, so you end up having a fully multi-threaded GUI (now hw accelerated) with all classes (if you choose to take advantage of them) producing fully multi-threaded instances. All you have to do is make sure that your procedural code is MT.
It simply isn’t realistic to create a 4 processor Mac in a short timeframe. Bus wise, the G4 is fairly similar to a Pentium 3. Dual processor machines are created by putting the 2 processors on a shared bus. This works fairly well for 2 processors, but bus contention reduces scalability above this point. It also hurts that the current G4 is currently starved for bus/memory bandwidth. Any fixes to this problem would require the creation of a whole new chipset (and a beast of a chipset at that).
If they could fix the memory bandwidth issue (switch to a point-to-point model like Alpha/Athlons do), this would solve a lot of their issues. Besides performance, SMP would probably also allow Apple to make cheaper Macs. CPU prices go up a lot as they get faster (a 20% faster chip can be twice the price), so Apple could sell dual- or quad- slower G4s at the same or lower price than faster single G4s.
More and more Mac software is MT’ed these days; I think this would really make for faster Macs, especially in Mac speed-sensitive stuff.
I heard something to this effect before, that Apple had indeed prototyped a 4 processor G4, but had decided there was insufficient market for it. But this was back before they acquired Shake and other high-end video apps. Who knows? Maybe they’ll give it another shot.
Eugenia,
Read the WHOLE message. I said:
“However, if you multithread your app, it makes the task scheduler a little “smarter” about its scheduling.”
What part do you disagree with? It sounds to me like we are saying the same thing.
You don’t know Apple. You say the G5 isn’t in sight, just like people said this last Macworld would be nothing. Apple has good security, and it’s highly unlikely anyone will know, or that there will be any leaks, until Jobs wants it.
CPU speed is actually more than enough. What is needed is to speed up the rest of the components, especially the HD. Even with a 9999 THz CPU paired with a 100 MB/s ATA IDE disk, the OS would still take minutes to boot and applications would still take seconds to launch.
P.S.: I’ve got a 733 MHz G4 PowerMac and it works clearly better than my 800 MHz Athlon with Windows 2000. Both have similar hardware, so I suppose the difference is better software.
Completely and totally off-topic, mod me down Eugenia if you wish, but what on _earth_ is “s/Iris/Irix/”? Some kind of regular expression? I’m not getting it…
Yes, some kind of online abbreviation. It means “in the above context, replace the word ‘Iris’ with the word ‘Irix’.”
s/iris/irix is the syntax sed (the Unix Stream EDitor) uses. The general form is s/A/B/[g], where A and B are regular expressions and g means “repeat in this line.” Now you know… go impress your friends.
There are only a few cases where you can apply AltiVec. As Intel found out with the early P4s (which weren’t clocked high enough to mitigate the performance issues of the long pipeline), most software just isn’t amenable to fancy stuff like SSE2 and AltiVec. Most software (desktop apps in particular) is full of integer math, can’t be vectorized, can’t be parallelized, and spends a lot of time doing boring stuff like branches, memory access, pointer manipulation, etc. For that kind of thing the only thing that helps is sheer clock speed fed by sufficient memory bandwidth. Thus, the whole point of the P4 is to maximize those two items.
“CPU speed is more than enough.”
I remember John C. Dvorak debunking this when the Pentiums came out, by quoting people who said the same thing when the 286 came out. The only sure thing about history is that it is sure to repeat?
I was working with a 3D modeler the other day, and my computer (a 2GHz P4) felt loaded at about 100K polygons per frame, which was just enough to do a simple scene with some cars and backgrounds. My CPU could be 10x as fast, and it still wouldn’t be fast enough.
I was looking into the techniques used to do realtime-shadows in games like Doom III. Even GeForce 5 FXs with their 500MHz 8 pipeline architectures running in tandem with 3GHz P4s still have to resort to fairly hackish methods to do shadows, and even then, the algorithms still aren’t fully general and don’t look as good as real shadows.
I downloaded a “real-time” raytracer demo the other day. The engine had realistic lighting, soft shadows, reflections, and anti-aliasing, but *very* limited geometry (think pre Voodoo-I software rendering). It ran at a blistering 0.6 fps at 640×480.
My (hand-built and well-tweaked, btw) X-ray diffraction simulation takes nearly a minute to generate an 800×600 grayscale image. And that is with simple crystals (100 atoms or so). If it could do this about 20 times faster, I could visualize what happens as the crystal is rotated/deformed.
It takes Mathematica several minutes to churn through my Linear Algebra assignments (and we’re talking first year, intro level assignments at that!)
Compiling KDE takes a good 6-8 hours.
Thus, unless all you do is web and email (in which case, do yourself a favor and buy a WebTV or something), you definitely do *not* have enough power.
In the readme file for the December release of Apple’s Developer Tools, it is mentioned that Project Builder supports parallel builds: according to the documentation, one build per processor. The odd thing is that Project Builder currently supports up to three simultaneous builds.
Now, the documentation states that 2 parallel builds work fine for most multiprocessor systems. However, it does not bother to explain which systems would benefit from 3 parallel builds.
I am not a system designer, but I believe there might be some serious obstacles to making a three-processor Mac, with scalability and resource sharing. Can anyone explain under what circumstances one would configure Project Builder to do 3 parallel builds?
Or… just perhaps we’ll see a G4 “cubed” ;->
There is only one story on OS News any more!
Let it drop. If you feel the urge to publish a story on Apple, and the story is anything but actual, confirmed facts, don’t do it.
We can talk about what architectures Apple is going to switch to forever. In the two years since OS News has existed, Apple has done a lot, but they haven’t changed architectures. Let’s wait until Apple actually releases a press release to speculate, okay? Even the Mac rumor sites aren’t this bad.
> There is only one story on OS News any more!
> Let it drop. If you feel the urge to publish a story on Apple, and the story is anything but actual, confirmed facts, don’t do it.
That would work if OSNews were only about news… it’s also about discussion of OSes and the hardware they run on (at least that’s the idea I’ve gotten from the articles I’ve read here). You really shouldn’t trash this thread; it’s the first Apple thread in a while with a lot of positive and intelligent discussion.
Now to add something that’s on-topic, I think Apple will go with IBM’s new 970 chip. I think it will give them the performance boost they need to play catch up… at least in the desktop market. I don’t know how viable those chips are for portables.
…is that until Apple can convince volume buyers that its entire hardware package is up to date and has a future, organizations won’t buy 4-6 proc machines from Apple.
Argue CPU speeds day in and day out, but Apple hardware is behind the curve, even if their machines satisfy most of the purchasers.
I would love to switch to Apple, but at this point I can wait 1-1.5 years to see if they’ll figure their things out. For now, I’m just going to get a 2.4GHz replacement box (keep all my other hardware, like my very expensive 19″ monitor) for $700 and keep trucking.
For people like me, that’s what Apple is competing with. They’re not necessarily just competing with another whole machine from Dell.
While Apple waits for IBM to deliver the POWER4 “lite” version for the desktop, can’t it use the POWER4 server chips which IBM currently uses in their servers? After all, they conform to the PPC specs.
Surely that chip would bury the current G4.
Sorry if this is a dumb question, but I’m not a hardware engineer. Can someone pls tell me why this is not possible / won’t work.
>There is only one story on OS News any more!
Heh… I always have a good laugh at people who don’t want anyone to editorialize on Apple. It is like saying “buy their products and shut up.” Sorry, sweetheart, but that is not what I do over here.
Even with the boost of going to the IBM 970 (how much is that anyway?) does this chipset have support for multiprocessing? How much bandwidth is available compared to the present chipset?
“Even with the boost of going to the IBM 970 (how much is that anyway?) does this chipset have support for multiprocessing? How much bandwidth is available compared to the present chipset?”
I believe in IBM’s servers, they are generally used in huge configurations (like 100s to 1000s of processors)… so yeah, they definitely support multiprocessor configurations.
I was under the impression that the “lite” version of the Power4 was going to be used in low to mid-range Linux servers (though I could be wrong).
These are a few things I found with a quick search on google:
http://212.100.234.54/content/3/27621.html
http://www.creativepro.com/story/feature/18074.html
Here’s a quick summary of specs:
* 64-bit core
* 1.4 – 1.8GHz speeds to start with
* 900MHz FSB!!
As for laptops, sounds like the 970 has reduced frequency step down capabilities so it can reduce heat and power consumption if needed.
i don’t really have a problem with apple’s system speeds.
i’d buy one right now…but they are too expensive.
example: I just bought an athlon xp1800 & mobo for $110 from newegg. with a few more parts i rounded out my tower for $400.
now a lot of people here blathering on about apple’s cpu speeds completely forget that it’s the price, stupid.
you guys could say the exact same thing about my little 1800 that you do about apple. “your 1800 is behind the curve…it’s slow…it gets its ass whomped by a 3 gigahertz p4” blah blah blah blah.
bottom line: i choose hardware that is inexpensive and does the job. a SHITLOAD OF PEOPLE DO.
i’d buy a mac in a heartbeat, because i’d like to take advantage of some of the apps, and play around a bit in os x.
but i can’t build/buy a new g4 mini-tower for $500 or even $600.
“now a lot of people here blathering on about apple’s cpu speeds completely forget that it’s the price, stupid.”
Well sure, that’s very important… the key is that as the 970s trickle down through the models (assuming that Apple uses them at all, of course; this is still just a rumor officially), the prices of the high-end Powermacs (and hopefully the Powerbooks as well) will stay the same, but they will be closer in speed to their PC counterparts. So yeah, they will still be expensive, but at least then the expense is a little more justified.
Firstly a bit of trivia: s/1/2/ is also vi syntax, and is a very fast method of doing search and replace. The expressions between the slashes are regular expressions.
Secondly, I have to admit I don’t buy the argument that Apple are a “digital lifestyle company”. Maybe that’s what they want to be, maybe that’s what their marketing dept try and tell people, but essentially they sell hardware. Hardware and software aren’t like clothes: a woolly jumper from 4 years ago is objectively just as good as one from 4 weeks ago, so clothing companies use fashion to push things along, which is fine. Hardware, though, degrades before your very eyes in a stunningly visible way. After a few years, your machine might as well be junk.
So, unless Apple can do a big leap and pull ahead of x86 once more, they’re going to find the Mac a tougher and tougher sell. At the moment the “it’s the whole experience, you don’t need a fast machine” line is just about holding on, but for how much longer?
On multi-CPUs: As has already been pointed out, >1 CPU doesn’t have much effect on desktop systems because 90% of the time you’re only interacting with one application and the rest are waiting around. The “speed” of a system tends to be influenced by how fast it can respond when you need it – when you start an app, flick between desktops rapidly, when rendering a web page and so on. So, speed most definitely matters, which is why you can so easily tell the difference between a 3GHz and an 800MHz chip, even though most of the time they’ll be idle.
I think a few people are being misled slightly by the fact that MacOS is microkernel-based, so different servers can be pushed onto different CPUs. That doesn’t make a huge difference, unfortunately, and might even hurt performance, because the overhead of coordinating communication between the two CPUs is relatively high (simply because they are not on the same chip die – hence the “hyper-threading” stuff that’s arrived lately). SMP kernels use something called CPU affinity, which attempts to group processes that communicate with each other a lot on the same processor. Unfortunately, because system services live in userland processes in a microkernel, virtually all apps need access to them, so it becomes hard to make good use of affinity.
Plus of course 2 CPUs is twice as expensive as 1 CPU.
I’m wondering at what point (if ever) the marketing sheen that Apple has given themselves will wear off, and people will realise that, far from using the latest, slickest thing, they are using the equivalent of a vintage automobile: cute, great in its day, but not all that practical. At the end of the day, integration with an MP3 player is nice, but can the details stand up to the competition? In particular, as Linux gets better, the attraction of cheap and fast hardware with plenty of free software might prove pretty compelling.
Nice idea, but I doubt that anyone would buy it. Apple overcharge for their hardware, IMHO. As proof that SMP is not the answer, take a look at this.
http://www.robgalbraith.com/diginews/2003-01/2003_01_07_macpc.html
Check the prices:)
This isn’t anything like the sort of data throughput that workstations are required to handle, and a P4 still knocks the big Apple beastie for six. The whole architecture needs to be seriously overhauled.
Machines using x86 would be cheaper than 8-way machines. Especially since most apps aren’t all that multithreaded (people think programming a multithreaded app is easy… I wonder why..)
Besides, they also need to tweak OS X more to make it faster and more responsive. One way is to slowly get rid of resource-hogging, useless eye candy.
people think programming a multithreaded app is easy… I wonder why..
I read through all the comments, and some sound like the person was under the impression that either the compiler or the OS would automagically multithread an app whether or not it was written that way. I might be exaggerating a bit, but yeah, you’re right, it’s not easy. Most small and simple apps don’t really need MT, and larger programs that do or should use MT probably already do. So where’s this claim that “more apps need to be multithreading aware” coming from?
If we can figure out a way to multithread a HelloWorld program, then we’ll know for sure it’s possible and probable to multithread all programs.
The claim that more programs need to be MT aware comes from the fact that a lot of programs that need multithreading (Mozilla, most KDE apps, etc) aren’t currently multithreaded. I don’t know how that situation is on OS-X, but I wouldn’t be surprised if it was similar.
I think a fairly good estimate of the power improvement when/if the processor switch happens is about a 3x improvement. Why? Well, SPEC isn’t the end-all benchmark, but it’s darn close. The 1GHz G4 scores a 306 (I can not for the life of me remember if it was int or fp, though; I can dig it up if anybody likes), which is about equivalent to a 1GHz P3. Now if everything scales properly, the 1.25GHz G4 would get about a 380.
Now didn’t IBM issue estimates that the 970 would SPEC in at about 1100? That would give us about a 2.8x performance increase, or 3x if we’d like to round.
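For what it’s worth, the arithmetic above roughly checks out. A small sketch, using the poster’s figures (the SPEC scores are their estimates, and linear scaling with clock speed is an assumption):

```java
// Sanity check of the estimate above. The scores are the poster's
// figures; scaling linearly with clock speed is an assumption.
public class SpecEstimate {
    public static void main(String[] args) {
        double g4At1GHz = 306.0;             // claimed SPEC score for a 1GHz G4
        double g4At125GHz = g4At1GHz * 1.25; // assume linear scaling: ~382
        double ppc970 = 1100.0;              // IBM's estimated score for the 970

        System.out.printf("1.25GHz G4 estimate: %.0f%n", g4At125GHz);
        System.out.printf("970 speedup: %.1fx%n", ppc970 / g4At125GHz);
    }
}
```

That works out to roughly 2.9x, which lands in the same 2.8-3x range as the estimate above.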
I personally think that will kick ass, and multiprocessor machines may become standard simply because of all this multimedia stuff Apple has bought up. Also, multiprocessor systems would benefit the desktop because things would simply be more responsive. I may only do one thing at a time, but most people have stuff running in the background: I could be ripping some MP3s and listening to them with one processor while the other handles my browser thread, so everything stays nice and snappy, plus we don’t have to pay a penalty for all the thread switching.
The state of multithreading on all platforms is appalling. The OS X Finder, of all programs, needs serious effort applied to it. It’s getting better, and so is Windows, but neither of them is there yet. The amount of 3rd party software that doesn’t do it properly is frightening.
You could almost imagine that multithreading was new and unheard of. The reality is, developers are just lazy. I’m a Java programmer, and yes, multithreading takes more work, and yes, it has an overhead, but it has to be done.
Why? So that the entire app doesn’t wait for one operation. Like reading from a disc, or waiting for the machine at the other end of a wire to respond, or whatever. If you’ve ever had a program hang for a few seconds, and not respond to any input because it was busy doing something else, then that’s because it’s not properly multi-threaded. It should respond to you at the same time as whatever else it’s doing.
This is true whether you have one CPU or many. The only difference is, the wasted CPU capacity is more visible on a 2+ CPU system, because you can see an entire chip being wasted. But don’t imagine that having just the one CPU means multithreading isn’t important.
So if Apple can persuade some of the developers out there to write programs properly, that’s a good thing.
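The “don’t let one blocking operation freeze the whole app” point above can be sketched in a few lines of Java. This is a minimal illustration, not code from any real app; the sleep() stands in for slow disk or network I/O:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: move a slow, blocking task off the main thread so the
// program keeps responding. sleep() stands in for disk or network I/O.
public class Responsive {
    static final AtomicBoolean taskDone = new AtomicBoolean(false);

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(200); // the blocking operation
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            taskDone.set(true);
        });
        worker.start();

        // The main thread stays free to handle input meanwhile.
        System.out.println("still responsive while the task runs");

        worker.join(); // wait for the background work before exiting
        System.out.println("slow task finished: " + taskDone.get());
    }
}
```

Exactly the same structure works on one CPU or eight; the only difference is how much of the waiting overlaps with real work.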
First: the 970 isn’t a shipping processor. It is in the design stage, being basically a “POWER4-lite”. The processor is targeted at the desktop market (as opposed to the high-end workstation market).
The POWER4 *is* shipping, is horrendously expensive, and yes, does exist in installations smaller than 100s of processors. They are HUGE, power-hungry, hot chips. There’s only one IBM POWER4 workstation machine currently; prices start at around $12,000 for a single-processor model (that’s for a single-processor server, but it’s the same form factor).
http://arstechnica.infopop.net/OpenTopic/page?q=Y&a=tpc&s=50009562&…
Check out this rather lengthy thread at Ars Technica; it details what we’ve been able to piece together about the 970.
The one very interesting thing I’ve read about the 970 is the possibility of a RapidIO memory system. With such a system scaling to higher numbers of processors would be MUCH easier than the current motherboard design/memory sub-system. So… I could see quad processor workstations based on the 970, and they’re going to be expensive, not a target for current PowerMac users, but for high-end users who would switch from a SunBlade, SGI Fuel, or RS/6000. I’d see the single – dual processor 970 machine targeted at the current PowerMac users.
I don’t think it will be mechanically possible to put more than 2 CPUs in the case. You have heat issues as well as mechanics: you would have to find a way to disperse all the heat, and the G4 is already a hot little sucker. The second reason is where to put them; they would have to significantly increase the size of the G4 case. I see more of a chance for dual-processor iMacs than I do for 4-CPU machines, maybe a dual-processor Ti. As for the G5, it is a pipe dream; it is not coming. Apple may come out with a 1.3GHz machine and call it the G5, but the PowerPC chip is done.
The 970 will solve all of our and Apple’s problems, if they can get it to market fast enough. Come on and look: 64-bit and backward-compatible 32-bit, can access terabytes of DDR RAM, not just 2 gigs; a 900 bus, not a 133 or a 167, but 900; more instructions per clock, faster clocking, and with all that ability you won’t need a wind tunnel. Just maybe I’ll be able to play MOH or Doom 3 without having to turn anything down, and listen to iTunes while I’m doing it! Powermacs have hurt the whole line by keeping everything behind them; Motorola was just not that interested, I guess. Let’s just hope it won’t take a year for this to happen, because if it does, maybe they should look at AMD or Intel. Not to harp, but I love my Quicksilver and its software, yet the hardware seems so behind when looking at the Wintel stuff. If Apple had this out now and in their iMacs, just imagine where their market share would be! They couldn’t make them fast enough. I wish Steve Jobs was a hardcore gamer, because then he would see! Sorry for the long wind, but I hate seeing Mac playing catch-up!
I can’t believe that Apple’s engineers, in developing the current kernel to manage 2 CPUs, didn’t think about managing more than 2 CPUs.
The workload is so similar….
About the hardware: the memory-access bottleneck can be a problem, it is true, but 1 or 2 MB of cache can fix at least 70% of the problem.
So if 1+1 means 1.8 CPUs, then 1+1+1+1 can be about 3.2 CPUs.
So four 1GHz CPUs could encode MP3s or render a Maya 3D scene like a virtual 3.2GHz CPU… it sounds like: we have the same CPU power.
(don’t forget that Altivec can boost performance too, and don’t forget that Apple’s stuff has a soul :) )
Ok, software needs to be written to use more than 1 CPU to get the advantages of multiprocessing, but anyone can agree that:
1) an 800MHz G4 is enough to type text in an unoptimized word processor
2) nearly all CPU-burning OS X software today supports multithreading
So you can get the power where you need it with more than 2 CPUs and fight the x86 hardware.
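The “1+1 means 1.8 CPUs” guess above can be cross-checked against Amdahl’s law; the numbers below just rework the poster’s own estimate, they aren’t measurements. A 1.8x speedup on two CPUs implies roughly an 11% serial fraction, which predicts about 3.0x on four CPUs – a shade under the 3.2 guessed at, but in the same ballpark:

```java
// Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the
// serial fraction. Solve for s from the two-CPU figure, then predict four.
public class Amdahl {
    static double speedup(double serial, int cpus) {
        return 1.0 / (serial + (1.0 - serial) / cpus);
    }

    public static void main(String[] args) {
        double measured2 = 1.8;          // the poster's "1+1 = 1.8 CPUs"
        // From 1/(s + (1-s)/2) = 1.8:  s = 2/1.8 - 1 = 0.111...
        double s = 2.0 / measured2 - 1.0;

        System.out.printf("implied serial fraction: %.1f%%%n", s * 100);
        System.out.printf("predicted 4-CPU speedup: %.1fx%n", speedup(s, 4));
        // prints "predicted 4-CPU speedup: 3.0x"
    }
}
```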
Cost problems? I can’t believe that.
Motorola signed up for the G5 but didn’t make it…. Why did Apple double its PowerMacs’ CPUs in August without raising prices? I believe they didn’t pay for the second CPU; maybe, while time runs fast and CPU MHz walk slowly, Apple can get more and more CPUs at the same price.
It sounds like: “build the G5 for me…. if you will not do it, then I will get my virtual G5 for the same price”
See ya
There is a single strong reason why Apple won’t switch to x86: developers would abandon Apple if they had to recompile all of their apps for a new CPU architecture. Remember, they have just gone through carbonizing their apps for OS X, and this would be a second major change; the first was the move from 68K to the PowerPC CPU, which required a lot of work. And there’s the endian issue, big-endian versus little-endian. This is a difference between x86 and PowerPC (and one of the main reasons why Apple didn’t select Windows NT over NeXTstep, Solaris or BeOS).
IBM has been innovating, developing SOI and copper chips, while Motorola is just sitting on its butt. Remember, Motorola designs its CPUs for communications; they are used in network devices like Cisco routers, and Apple is the only desktop customer.
I agree with Mac man 100 percent. So that gives them Motorola or IBM. Motorola just ain’t getting it done; that’s why we now have 2 CPUs in the Powermacs to try to make up for it, which only helps in those few programs that know how to use them. Most of us know this is a band-aid fix at best. That then gives us the 970! Now imagine if Moto was getting ready to release a top-secret chip that really blew the doors off the G4 – I just don’t think that will happen. Maybe a slight G4 bump with the 7457, but nothing to make up for the huge gap with the Intels, and so we are back to the 970, which could close this gap. But again, how long will this take? What will it cost? Also, how long for it to filter down the whole Powermac line? Good thing they have the Very Best Software and the (Prettiest Computers)! Maybe I’m totally wrong and multi-CPUs is the answer, but I just don’t see it.
Simone Giuliani said:
> don’t forget that Altivec can boost performance too,
> and don’t forget that Apple’s stuff has a soul :)
Actually, a soul can have detrimental effects on performance, as we’ve seen. Calculating all those conscience calls and regrets, takes up CPU time. That’s why all the most successful pieces of wetware out there are the ones that don’t bother with one.