eWEEK is reporting that Apple is nearing the release of a new Power Mac known internally as Q37, which will include the new chip. However, Q37 will ship before Apple rolls out a 64-bit version of Panther in September. Instead, the new Power Macs will ship with a special Jaguar build train code-named Smeagol. Smeagol will run on the new chip but won’t take advantage of many of its key features, limiting initial performance gains.
If there is a new processor then it must show attractive gains off the bat for people to invest in it. It need not, however, take advantage of every single nuance of the chip.
On top of the OS being 64-bit there need to be 64-bit apps to run on it.
Regardless, I doubt the lack of a 64-bit OS will hamper sales or will steal the thunder of the 970 PowerMacs. If anything the systems will be fast and will just get a bit faster when Panther ships.
It is time for Apple to pump up its hardware, and it seems WWDC is going to be the place for the announcement. The real question is whether it will be worth picking up the new line before Panther.
Mmmmm mmm? mmmm…. mmmm! )
>> On top of the OS being 64-bit there need to be 64-bit apps to run on it.
Wrong. The whole point of the 970 is that it can natively run both 32- and 64-bit applications _without_ any modification. Eventually, every software vendor will want to port their application to 64-bit though.
Some people over at AI are suggesting that if it ain’t at full speed, wait… But these machines should be released ASAP for the developers!
Developers, developers, developers!
Developers aren’t confused about why things are getting bogged down in the pipeline… as long as they have a beta/test version of Panther with whatever updates are available to 3.3 and a functional 3.1/Jaguar release, things should be cool.
As for consumers: “the real question is whether it will be worth picking up the new line before Panther.”
Well, I would say this rests on your overwhelming desire and your tendency to (or not) b!tch about upgrades. i.e. If you wait, you can buy new hardware with Panther pre-installed and FOR FREE… Otherwise, you may get pinched between July/August and September.
I imagine availability will be limited initially–I hope the early release will come, in part, in conjunction with offering WWDC attendees preferential treatment.
Hopefully, this will lead to a rapid migration of OS X and other applications to take advantage of the 970 to its fullest.
First, Apple will likely be giving a “Free Panther” coupon to everyone who orders a 970 loaded with Smeagol. Count on it.
Second, I would rather wait anyway until Panther comes installed on it just so I wouldn’t have a new machine that had already been through an OS upgrade.
AC, there are advantages to having 64-bit apps on 64-bit hardware. I am well aware that Apple’s roadmap is compatibility, but for an app like Photoshop or Oracle9i to run on a 64-bit platform is a huge benefit, not just in the amount of memory that can be addressed but in performance.
Panther will really need 64-bit apps to show what it can do. Compatibility is expected but we need performance and moving apps to 64-bit is one of the ways to get there.
Oracle, yes.
Adobe products, NO WAY.
What percentage of an already small market are going to be 970/64-bit based?
Do you think Adobe is going to realize a great enough advantage that it’s economical to go through a great deal of work in order to support two different systems (32 bit and 64 bit)?
I don’t. I think you’ll see the few server-side apps that have recently been moved to X be ported to 64-bit. I give it two years or more before high- to mid-end apps (like Photoshop, Maya, etc…) begin the migration. (Depending on the uptake and mix of 970s in the product lineup–i.e. if Apple is 100% 970 in two years, this could move quicker, but I think a G4, or G3w/Altivec, will have some legs.)
This gets so old… do people even do their homework anymore?
C’mon, man! I find it so annoying that so many ignorant posters come through these news sight forums and blurt out a bunch of nonsense about this that and the other thing.
The comment you made reminds me of when I am at the bar, having a conversation with a friend about something she & I are familiar with, then
some uninformed person sitting nearby (you) suddenly breaks into the conversation, making a dumbass of themselves by talking loads of BS.
If you don’t know/understand something, please ask a question instead of annoying those of us who have done our research and have something constructive to add to the conversation. Please, don’t throw a bunch of false information into the laps of those of us who have already taken the time to figure out what the hell is going on and how to best take advantage of it.
If you are trolling, go away, we’ll tend to ignore your posts.
I switched at the perfect time, I guess.
The apps will come, you can be sure about this. How long have we talked about 64-bit goodness? Developers have been preparing/planning for this since we first heard about the move. I know I and a couple of my code monkey friends have been brainstorming on a UI redesign, better icons, cutting out the fat, etc. I’m glad that I decided to code for OS X; it’s been one of the best decisions that I’ve made in a long time. (still programming for Windows, making money on both ends :p)
I really do…
Well, Anonymous at least gave me an informed response. I don’t know why Apple would not exploit a processor advantage to their benefit like they do with Altivec, and also encourage developers to do so as well. You would be surprised at how much memory Photoshop can chew up–pretty much everything you throw at it. I don’t know of any graphic designer that wants their files swapped to the drive.
The Mac Genius Tem spells “site” by writing “sight” and doesn’t even tell me why I am wrong or what he thinks besides just calling me a dumbass. Yet has soooooooooo much to contribute.
Well Tem, want to contribute to the conversation, or do you want to keep calling me dumbass since you are apparently smarter and better informed on this topic than anyone?
Read my posts again; there was nothing condescending or trollish. Apple has a huge advantage in delivering 64-bit power to the desktop.
Intel has Itanium and AMD has Opteron. Itanium has compatibility with a sacrifice in performance and is not intended as a desktop system. Opteron has good possibilities but is not a major player in comparison to Intel.
Try a little civility, no wonder PC users label Mac users as an arrogant lot.
Why is everyone making such a huge deal over this idea of “64-bit” computing… I could be way off… but I always thought the G4 at its core was 64-bit, along with its “128-bit Velocity Engine” or whatever it’s called. I’m not seeing what’s going on here; someone please explain this to me. Hasn’t Apple always made strides with 64-bit+ architectures?
I only said don’t expect a rapid transition from companies who will see little advantage to offering both a 32- and 64-bit version.
I think my argument and evidence are clear. Adobe makes about 35% of its revenue from Mac users. They shouldn’t expect a share larger than 10% to be 970 users for the first one to three years.
Does that mean they’ll wait to port to 64-bitness till then? No way, but I see little advantage to a rapid application migration considering (1) confusion produced by having two versions available, (2) small userbase able to exploit the new features, (3) an even smaller userbase of those with 970s but not hitting the 32/64 bit boundary, and (4) the cost and time to migrate while the 970, compiler, and system are being optimized to take advantage of it.
As I mentioned, those server products that just came to OS X are probably primed for it, and they already occupy a space where 64-bitness is common so these you will see migrate first and rapidly.
I meant to say in (3) an even smaller userbase considering that many 970 users will not feel the limitations of 32-bit applications. (They will migrate for the other speed improvements, DDR RAM, better bus, etc… but not need 64 bit addresses or > 4 Gigs of RAM.)
It’s faster. Someone did some benchmarks on the 970 running only 32bit apps; guess what, about 2x faster than the G4 at the same clock speed.
A 1.4GHz 970 is faster running 32bit apps than a 1.4GHz G4; why wait for the OS to be fully 64bit when you can show a 100% speed improvement? The 970 can also be made into a 4way system without any extra chips on the system; the G4 will only go 2way with extra chips. Thus, Apple could easily release a 4way 970 server which will be about 8x faster than the current high-end G4.
The upgrade is well worth it. I want one… I’ve had my eye on a 2way G4 1.4GHz tower; now I can get the low-end 1.4GHz 970 tower and be just as fast. Plus, I’ll get a larger memory expansion on the motherboard and later 64bit support!
The cost should be low; I’m expecting something in the same range as the current systems. The chipset is the same as AMD’s Hammer, so there is some volume already in production, and the 970 is about 20% cheaper to make than the current G4; so less for the CPU and maybe a little more for the chipset.
Hello all… my budget is around 1500. I want to buy a laptop that’s around/below 1500 that must meet the following requirements:
1) less than 6.5 lb
2) runs on battery 5+ hrs
3) not onboard integrated graphic card, radeon 7500+ preferred
Any recommendations? I want to buy an iBook/PowerBook but I think the MHz is really kinda slow.
thanks..
Help me out, Eugenia!… I know Dell has a lot of good deals like the Inspiron 1100 and 5100… or the 600M, but they don’t meet the above criteria…
There are many laptops around for that price that do all that and more. Check the Dell, Gateway, and Compaq offerings too.
Go and have a go on a PowerBook 12″ model and see if you think it’s fast enough for you. MHz doesn’t mean a lot, and the only benchmark worth looking at is the task you are doing yourself.
Don’t look at iBooks though, they could suck the rivets out of a steel hull. And awful build quality to boot.
Those benchmarks are OS-independent and are theoretical. If GCC needs to be hacked to support the 970, there are bound to be additional bottlenecks in the execution pipeline.
I agree, get it out fast. I just dispute the notion that it is certainly going to be faster from the get go.
But a two month wait? No big deal. Let it roll!
IIRC, many current Macs have USB2 controllers, but they are only advertised as USB 1.1 because there aren’t drivers for OS X. This is a case of the OS lagging behind the hardware too, isn’t it?
Obviously, lagging behind the CPU is a much bigger deal (there is more performance to be gained in everything) than adding faster USB drivers that won’t benefit every application.
I don’t think the PowerBook 12″ really lasts more than 5 hrs on battery… and I am a little worried about the 12″ screen. Other than that it would be my first choice.
The powerbook 12″ has 3.5 hours of battery max, I have that laptop here and that’s what it does. If you are really after long battery life you should look at the new IBM Centrino laptop that does 5.5-6 hours.
BTW, this is offtopic to the news item here, so please don’t reply again about this here.
with or without a 64 bit os.
The proc is not the only thing the new systems will have… the FSB will be (almost) 4 times as fast: that is a speed bump. It is using HyperTransport: huge speed bump. All in all, the new SYSTEM will be a heck of a lot peppier because the bottlenecks in places outside the proc will be reduced significantly.
No, it’s a case of the hardware lagging behind the hardware.
OS support for USB2.0 is minimal. The wait is in effect because Apple intends to push FW800 into more systems before enabling the USB2.0 features.
Don’t know what you mean by nothing new: 400 MHz DDR RAM, HyperTransport, etc… the 970 itself!!
I thought the BUS for the IBM 970 was 900MHz not 800.
See ArsTechnica for more info:
http://arstechnica.com/cpu/03q1/ppc970/ppc970-4.html#lsu-fsb
Finally Apple is coming to the same level where SGI was for some time, before they got stuck, lost any sense of reality, and tried to sell NT workstations.
The 64-bit PPC 970 is desperately needed for Apple’s plans to become the multimedia hardware/software provider with Final Cut, Logic, etc.
I don’t see anyone stating a MHz rating on the bus, but if you check out Ars, they explain that the bus of the chip scales (along with the speed of the interconnect used) “UP TO” 900 MHz.
It was presumed that it would be released at 400 or 800 (using 400 MHz DDR RAM).
Either way it’s gonna be faster than what Mac users have now.
For non-traditional markets like databases and 3D animation, 64-bit native software is a big deal.
Yes, I’m a Maya animator, and I can tell you that it does help. Windows + Intel Xeon right now isn’t cutting it for me, mainly due to instability issues. When I was on SGI+IRIX I never had these problems.
.. because there is not much to gain from porting to it unless you do need the extra features which frankly not many applications do.
The beauty of the 970 (and the Opteron) is that they run 32 bit apps extremely well, and there is no reason NOT to transition to a 64-bit system as long as the price is reasonable.
For the applications that really need this (and I also suspect there will be a new class of applications available for it) it is a boon.
As we have witnessed in the past, Apple is making a bid into many of the traditionally UNIX markets. The addition of a 64-bit chip to its arsenal will certainly be a plus.
Just look at some of their recent activities: buying Shake, advertising the ability to run BLAST, emphasizing Maya for Mac, their own X11 port. I think Apple’s intention to capture some of the market leaving the failing UNIX vendors (SGI in particular, but others as well) is pretty clear.
Don’t forget Quark.
http://www.osnews.com/story.php?news_id=3735
I guess that means that something _is_ coming. 😉
Also don’t forget Emagic Logic (digital audio/MIDI sequencer). 64-bit could help those digital plugins (like reverbs (echoes), delays, compressors, equalizers) sound a lot more like the real deal (analog hardware).
If any of you Mac sledgers took the time to read anything about the 970 chip, it not only does 64bit operations but also is faster for 32bit operations without the need for recompiling or tweaking of applications. This is done by the CPU doing more work per-cycle than the current G* line of chips.
>> I don’t see anyone stating a Mhz rating on the bus, but if you check out ars, they explain that the bus of the chip scales (along with the speed of the interconnect used) “UP TO” 900 MHz. It was presumed that it would be released at 400 or 800 (using 400 MHz DDR RAM).
I thought that it ran at 1/2 processor speed (well, 1/4 then doubled), which would give 800MHz for a 1.6GHz system.
The “up to 900” would refer to the 1.8GHz part.
This is just what I have gleaned from Ars and MacRumors.
>> If any of you Mac sledgers took the time to read anything about the 970 chip, it not only does 64bit operations but also is faster for 32bit operations without the need for recompiling or tweaking of applications. This is done by the CPU doing more work per-cycle than the current G* line of chips.
Yeah, the P4 is deep. The G4 is wide. The 970 is wider and deep (but not as deep as the P4). There is a great series of articles about the 970 at arstechnica.
Plus, the faster memory bus (and there are two!) should greatly help Photoshop and FCP tasks. The Altivec unit is starving on the G4.
Arse.
Or some people do at least. No, think of the nightmare that would be once you had 3 or more clockspeeds out. Some would take full advantage of the bus and hence higher end RAM and others wouldn’t? Nah.
Apple’s strategy over the last few years indicates that they are looking to make SUBSTANTIAL movement into science and engineering. The utilization of a BSD/UNIX core, increasing support of XFree86 with X11, and now the move to a 64-bit processor. For a long time, if you wanted to run high-end computer aided engineering tools, it was (for the most part) exclusively on a Solaris system. Though a number of these programs have been ported to Windows, there is still a bit of apprehension about using these programs in a Windows environment. Personally, I think it would kick ass to be able to use FLUENT, IDEAS, etc… on the same box I’m writing research papers on or surfing the web. It will be interesting to see if Apple will begin targeting this market again; they could make some significant inroads here.
Considering that IBM refers to the PowerPC 970 going “up to 1.8GHz” and the memory bus going “up to 900MHz”, the analysis I shared is not far-fetched.
It is noteworthy that IBM always refers to the number 900 and not a single time refers to 800MHz in their documentation.
That to me says 450MHz DDR.
Don’t forget the Apple Server product-line as well. I’m sure that they’ll be putting them on the new 970 architecture when they come out too.
shut up all of you!!! Such nonsense!!!
just go buy a dell or apple and shut the fuck up!!
who gives a shit about market share for adobe applications and the opinions of mere mortals?!? I don’t give a shit and I probably speak for thousands of others!
Is this what capitalism has turned into, a bunch of whining bitches that can’t justify the existence of someone with something better than them in some regard?!? I mean for pete’s sake! Apple is a great solution for computing needs, so is Dell. Which one fits your bloody fucking cost structure, and which one doesn’t. It’s as simple as that. Quit trying to predict the industry events with your shoddy googled research, it’s fucking insane! People actually get paid for doing market research, you guys don’t, yet you do 200% of what they get paid for, I think you guys are stupid! Get a job! Go read a book! Stop polluting OSnews with your stupid opinions!
For any apps developer who decides to move to 64-bit binaries, the result would be relatively trivial to package alongside the standard 32-bit binary.
The Mach-O binary format has intrinsic support for fat binaries which means that binary code for multiple platform branches can be included in the same unified program. PPC32 and PPC64 would be just two examples of such a fat binary.
Regardless of that, the PowerPC 970 natively supports user-mode 32-bit code execution, much in the same way that the Opteron CPUs do. Unlike the IA64-based Merced and Itanium series CPUs, no emulation system is required.
So even if apps developers chose not to rebuild their apps as 64-bit binaries, which would make sense for many, many apps, as they would gain little, nothing, or even reduced performance by making an unnecessary transition to a 64-bit program, 32-bit apps will still run at full speed on a PPC 970.
The biggest concern should be whether or not any serious work has been done to improve the dismal performance of the PPC branch of GCC, or better yet, whether Apple is going to license IBM’s VisualAge compiler suite.
I guess no one remembers going from 24-bit to 32-bit way, way back in the 680×0 days. Or migrating from CISC to RISC.
I didn’t really see a problem with application support. Not from 24 to 32. And not from CISC to RISC.
All vendors went from 24 to 32, and from CISC to RISC.
It will all happen now as it did in the past, regardless of any performance boost etc.
If a developer codes for the Mac, they will use GCC3.3 when it’s available. With that, they will be able to build 64-bit apps.
Whoever hasn’t signed up for the WWDC, there’s still time left. You’ll actually learn something rather than speculate with the rest of the crowd.
Yeah! How dare you come to a discussion board and actually want to discuss stuff! Stupid noob lusers!
And, yes, swear words make an argument better! Look it up if you don’t believe me!
(thought I’d just say something before the original gets modded down)
Anon:
The G4 is 32-bit, despite what all the Macheads say.
The 128-bit Velocity Engine is merely how big the pathway to the L2 cache is.
Hi, it’s a bit off topic, but basically I want to know if these CPUs multitask, as a mate of mine has stated emphatically that without dual/multiple CPU systems it is impossible to multitask.
To me this seems absurd, but I was wondering what you people here thought, as you seem to know the PPC architecture well and many here also know the x86 architecture pretty well.
Also, how do the two compete in this regard? I mean, which is easier to program the capability into your OS on?
The AltiVec unit is a 128-bit logical unit. It has 128-bit registers, and can operate on full 128-bit words at a time.
Also, if I’m not mistaken, the PPC 745x and 744x CPUs with the on-die L2 cache actually have a 256-bit datapath to the L2 cache, though that is totally unrelated to the issue of AltiVec.
The notions of a 32-bit and 64-bit processor are typically defined in the “bitness” of their ALU which defines the max native length of general purpose registers, which is also tied into secondary memory addressing as well.
The AltiVec unit is 128-bit, but the decode and execute units on all PowerPC chips (except the terrible 620) are 32-bit. More interesting will be how many bits the memory controller supports. I know on AMD’s Opteron, the instruction units are 64-bit but the memory controller is 40-bit (at least for now). While that adds a lot of headroom over 4GB, it’s not the petabyte figure usually quoted when mentioning 64-bit memory addressing. The big memory support is why 64-bit is desirable – it’s why companies still buy UltraSPARC or Alpha hardware for database servers, even though the ia32 architecture may be faster. Even with PAE, ia32 can’t compare, particularly on a busy machine. PAE reminds me of the whole EMS/XMS thing from sooo long ago, only implemented in hardware.
The x86 architecture has always been hamstrung by three large problems. First is a very scarce quantity of rename registers. Second is the morbidly strange x87 stack architecture. Third is a limiting interrupt architecture.
The PPC architecture suffers technically from none of these things. However, it shouldn’t be seen in some way that specifically praises the PPC architecture for avoiding these issues, because virtually no other CPU architectures beyond x86 suffer from these things. SPARC, MIPS, ARM, PPC, etc. all more or less did things the “accepted correct” way, and Intel just happened to be the oddball.
AMD “fixed” the rename register problem, or at least attempted to address it, on their new x86-64 CPUs. However, I’m unclear as to whether non-64-bit code execution can take advantage of the general architectural extensions AMD added for the x86-64 spec.
It seems like Apple will release its 64 bit machines soon. What is the most likely outcome? A few months of good publicity. By Christmas, AMD will probably start shipping 64 bit consumer AMD64 processors, and much cheaper (than Apple) 64 bit PCs will hit the market. Any speed advantage to Apple will be quickly overshadowed by much cheaper, faster Intel and AMD 32 bit solutions. An AMD XP 2600+ is only US$100 now and will be much cheaper ($50?) in 6 months, yet it will probably be as fast as an entry-level PPC 970 in most benchmarks.
Apple will have a hard sell explaining why its expensive 64 bit machines don’t perform better than a $250 AMD whitebox (that’s all a midrange PC will cost by Xmas 2003).
dont you guys know why there are windoze trolls on this board? BECAUSE M$ PAYS THEM TO BE HERE. no money is exchanged, but they are called ‘microsoft VIP’s’ (their term not mine) — and all they do is watch boards all day and post favorable wintel messages, while trashing linux, mac, or whoever is not MS. in return, no cash as i said, but ‘perks’ like free XP, conference passes (especially valuable to ms, as they need ‘rabid’ (read paid) fans to go nuts when ballmer does his monkeydance.)
“Apple will have a hard sell explaining why its expensive 64 bit machines don’t perform better than a $250 AMD whitebox (that’s all a midrange PC will cost by Xmas 2003).”
First, your definition of “midrange” is sorely lacking. Midrange means “middle of the range.” A midrange PC will always run for about $1500 because PCs tend to cost from $500-$2500 without a monitor. A $250 machine may be “fast enough” but then that was never considered an excuse for Macs being slow so it shouldn’t apply to PCs either. Face it, for $250 you’re not getting much from any manufacturer (home built really isn’t a consideration here).
Second, Apple is not attempting to compete with a $250 whitebox machine. Given recent rumors it appears that they are considering an entry into the $600 whitebox market but they will never go lower than that. The margins just aren’t there. Why mass produce and market a PC for which the margin is $10-20 per unit? That’s just foolish.
Third, price/performance comparisons are pretty difficult right now considering that we don’t know the price points for the new PowerMacs.
Why? ’Cause there is more than one player developing a 64 bit OS! No! You don’t know? M$’s ugly OSes. And they will certainly go 64 too. And what will happen to the graphics apps? They will try to take advantage too. There are OS wars, and graphics app wars too. Imagine if Macromedia releases a 64 bit FreeHand: what will Adobe do with Illustrator? Or Photoshop?
Just my imagination flying. But I believe OS X, with its fathers NeXT and BSD, is going to be “THE” 64 BIT OS 😉
Steve Jobs was always interested in these areas with NeXT.
The midrange “whitebox” argument depends on what Apple puts on the motherboard. If Apple does trick the machine out by adding things like hypertransport, dual bank DDR 400, gigabit ethernet, fiber, and other nice high-end features then it will probably cost about the same as a PC with the same configuration. Besides, I’m sure most PC’s will come with only 1 cpu while the powermac will have a dual cpu machine in there someplace. Double the pleasure..double the fun.
If Apple does this I think this will be a great move. Sell the boxes first then add the OS later. That’s fine by me. I want the 970 ASAP. This will also give the developers a chance to “play” with the new hardware to squeeze as much out of it as possible.
I can’t wait until Apple updates FCP4 and Shake for 64-bits!
As Homer Simpson would say…”AHHHHHHHHHHHH”
Now all they need is a C++ API.
Yeah, I’m having a look at
http://developer.apple.com/techpubs/macosx/ReleaseNotes/Objective-C…
but that’s not exactly what I mean.
“Any speed advantage to Apple will be quickly overshadowed by much cheaper, faster Intel and AMD 32 bit solutions”
Cheaper?, yes
Faster in GHz?, yes
But as a solution to the future needs of high-end applications, x86 is woefully inadequate and at the end of its life. Even Intel understands this and would like to break free of the x86 legacy with the Itanium.
The AMD 64 bit solution adds backwards compatibility with x86 as does the 970. This is a critical issue for desktop use as many if not most applications will never be ported to 64 bit.
Wintel and x86 (whether Intel or AMD) will always be cheaper and be faster in GHz, so for game-boys that will be the way to cheap out and still get a great fps in Quake.
Having said that, 64 bits is the future of computing, and once again Apple leads the way forward. AMD’s efforts will be hampered by Microsoft’s delay in producing an OS that runs 64 bits in an effective way. OS X is ready now, and Panther will reign supreme as a 64 bit OS because of its basic foundations.
Longhorn won’t be ready until 2005 (if then), and 64 bit XP will be a kludged-together OS with the baggage of Windows “legacy” to carry. To be sure, Linux and the other *nixes will be better suited to the AMD 64 bit CPUs, and you can’t get any cheaper than “free.”
As far as being overshadowed by “cheaper and faster” x86 solutions Apple and Panther are in a class above these cheap solutions. This is a workstation, not a games console. Fps in Quake is not the measurement that will count. Render times for high end applications will be the measurement that matters most to the serious users of these machines.
Quality and engineering will make Apple’s new desktops a better VALUE than the low ball self-assembled (or Dell assembled) commodity boxes. In the end all you have is a CHEAP computer using outdated technology.
As far as FAST, the new Apple workstations are going to re-define FAST, and it won’t be GHz, it will be throughput. Apple will take and hold the speed crown for the foreseeable future with their dual 970’s. If Intel or AMD encroaches, all Apple has to do is add more CPUs.
Microsoft won the desktop market-share war a long time ago. Apple is about to win the desktop speed war. Get used to it.
Aphelion …
Well, this is exactly the kind of comment someone with little or no knowledge of the Macintosh platform can make. The real strength of OS X comes from its Objective-C frameworks. If you’re experienced enough with other platforms (think BeOS or MFC), you’ll see the beauty of the OS services: no more fragile base class problems, easy Distributed Objects (see WebObjects as an example)…
And don’t start to argue that ObjC is slower than C++. If you really need speed in your calculations, then you can code in raw C using Altivec instructions (multimedia, 3D)… 🙂
“Given recent rumors it appears that they are considering an entry into the $600 whitebox market”
If that happens then I will never buy another PC again in my life. Why waste my money?
As for the poster who thinks that Apple’s 64 bit will be a short-lived PR stunt, think again. 64 bit will facilitate additional penetration in the high-end workstation (creative) market as well as in servers etc. You are also discounting implementation criteria. Apple’s shift to 64 bit may be smoother and more meaningful to final performance than that of AMD. Apple’s 64-bit machines may also gain better distribution than 64 bit x86; after all, which PC vendors are pushing that chip, and for what applications? The chip may also decrease the “performance gap” substantially if not completely.
Come on, knowing how Apple used to announce a product 3 months before it shipped, these new G5s will likely be announced at the end of the month, but will only ship once Panther is out..
Smeagol? LOL
I’d like to see Apple Marketing do a campaign with that name, just for the hell of it
Will the OS really run faster, or programs? I thought that the 970 will be backwards compatible and can natively run both 32- and 64-bit.
Alpha Fox wrote:
Well, this is exactly the kind of comment someone with little or no knowledge of the Macintosh platform can make.
Yes, that’s probably right. However, I’ve spent a good amount of time learning C++ as well as buying books and creating my own little utility libs. It’s tough to abandon that in favor of a mostly Apple-specific framework/language.
Also, I’m mostly interested in 3D and OpenGL stuff — I suspect that I’d only want Cocoa for drawing the GUI and then getting out of the way of my C++ code.
Apple has a C++ system framework. It’s called Carbon.
IOKit, the driver development framework, is also built and accessed with a derivative of C++ called Embedded C++. It lacks many aspects of the C++ language which would be considered unsafe for code running in kernel space, but it still handles inheritance, etc.
The 970 does run 32-bit user/client mode binaries.
Please refer back to my other posts on this topic around the 35-45 post range or so.
1)When?
2)Where?
3)How much?
I’ve built a PC that will hold me over until Middle Earth Online debuts…so IMHO now is the best time to get back in the Macintosh community.
If prices are comparable to what’s available now, the price of the system with the new technology (hardware/OS) will be worth the expenditure.
All of this talk about a PC whitebox is pointless.
Two things.
1) OpenGL is a C library, not C++, and they are most definitely not the same thing.
2) You can work with OpenGL in parallel with your C++ code because every C program is a valid C++ program.
Interestingly enough ObjC is the same way. Every C program is also a valid ObjC program.
Also, beyond that, Apple's development tools have supported Objective-C++ for a while now, which allows you to mix C++ code with your ObjC code, just as one can with C code.
Alan:
You wrote that a friend told you that a single processor can only handle one task at a time and that you need multiple processors to multitask. What your friend said is technically true, but it is really more of a half-truth.

It is true that a single processor can only deal with one set of 1s and 0s at a time, so in that sense it can only single-task. However, modern CPUs deal with billions of 1s and 0s every second. What the processor does is switch between two or more sets of 1s and 0s: it reads one set, stops, reads another set, stops, goes back to the first set, and so on. So, technically, one CPU can only do one task at a time, but to you the user it appears as though the CPU is doing several tasks at once. With two CPUs, a computer can handle two sets of 1s and 0s simultaneously. Again, to you the user, there is not much difference.

So, while what your friend said is technically true, it doesn't really matter in practice. The only time two or more CPUs are essential is when you are trying to run two tasks that each take up a whole CPU at once, and I assure you that is very rare, at least as far as normal desktop computers go. Note that even with one CPU running one application, the computer still has to deal with 1s and 0s coming from both the OS and the application, so the CPU is always quasi-multitasking, even with just one app.
Skipp
You can work with OpenGL in parallel with your C++ code because every C program is a valid C++ program.
Uh, no. Here’s one example of a valid C construction that isn’t valid C++:
typedef struct foo {
…
} *foo;
I wanted to clear up some of the confusion surrounding the performance of 32-bit vs. 64-bit applications.
Maybe the confusion is due to the fact that 64-bit processors up to this point (barring the Itanium) have tended to be very high-performance machines. Except for large 2+ gigabyte databases and other programs needing large quantities of memory, 64-bit addressing is more likely to cause a performance *loss*. This is comparing the same machine between 32-bit and 64-bit mode.
The reason for this is that for a 64-bit program *all* memory pointers are 64 bits wide. For the same program compiled for 32-bit operation, the pointers are half as wide and therefore storage of the pointers themselves takes half as much memory.
This increased memory usage has two bad effects on performance. First, it consumes more of the data space in the processor caches which could have otherwise been used for additional data. This means that the average cost of accessing memory increases. The second bad effect is that 64-bit pointers need twice as much memory bandwidth to load and store to/from memory. Once again, this is bandwidth that could have been used for other purposes.
To see this effect, check out benchmarks for the AMD Opteron comparing 64-bit to 32-bit operation on the same program. The 32-bit version has higher performance.
As a result of all of this, you probably don't want every program compiled for 64-bit. For most software it is completely unnecessary.
Well, there’s always Seti@home. Mine is usually up in the 150%+ CPU usage range
Apple has a C++ system framework. It’s called Carbon.
Well, I only spent 10 minutes or so flipping through the “learning Carbon” book, but it looked like straight C to me.
1) OpenGL is a C library, not C++, and they are most definitely not the same thing.
Huh? I didn’t say they were. I merely implied that I use a C library in my C++ code. Since C and C++ are so similar, it’s not like I need to learn yet another language to do so.
Also beyond that Apple’s development tools have supported Objective-C++ for a while now, which allows you to mix C++ code with your ObjC code, just like one could with their C code.
Yup, I had a link to one of their Obj-C++ docs in a previous post.
Anyway, sorry this got so far off-topic, I just originally meant that, now with the PPC970 coming, Apple would be a very nice platform to program for (especially for my math and graphics stuff) if they only had a C++ API for me to use (instead of a hybrid C++/Obj-C thing).
It’s hard enought getting my little humanoid brain to switch between languages between projects, let alone within one single program!
So Mr. Serious, you wrote “Apple has a C++ system framework. It’s called Carbon.”
It clearly demonstrates your complete lack of knowledge about the Mac platform. Carbon is only a C framework which, for the latest developments such as the new view paradigm, uses an object-oriented approach, just like QuickTime.
Nevertheless, if you want to start a brand new Mac program, you’re encouraged to use Cocoa instead. Carbon is here for compatibility with the past. The future is based on Objective-C.
C++ is a very good language for closed applications without plugins, but Objective-C is the definitive answer for OS frameworks.
Apple needs to see the light and make Cocoa open source. Why? Because this would allow developers to use Cocoa to target OSX, Windows, Linux, whatever someone wanted to spend the time developing for. One might question "why give Windows developers a powerful tool like Cocoa?" Because nobody makes money developing for OSX alone.
Apple has the killer app, Cocoa, and they need to use it to gain market share. Apple can make the fastest computers in the world, but they are not going to make a significant difference in their market share unless they have developers creating programs for OSX.
Apple has the killer app, Cocoa, and they need to use it to gain market share. Apple can make the fastest computers in the world, but they are not going to make a significant difference in their market share unless they have developers creating programs for OSX.
OS X already has an extensive application base and an enormous development community.
Your solution is for Apple to take their “killer app,” port it to Windows, then give it away for free. Sounds like a winning play there, chief.
So, if one uses the 970 with 32-bit apps, then what goes into the vacant space, the other 32 bits?
I suppose Apple could do a type of "hyper-threading" approach, where more instructions could fill the other 32 bits of the 64-bit space and in so doing make full use of what's available.
I don't know what is possible here, it's just a wild idea;
maybe you could call it idle curiosity?
Nice wild idea. But I think you already know that doesn’t fly
If you’re really interested in how things work, go to arstechnica and anandtech and read up some!
OS X already has an extensive application base and an enormous development community.
I guess I seem to have overlooked the “extensive application base” when I was searching for educational software for my daughter. The few titles I did find weren’t even Carbon compliant, so I would have to run them in Classic.
As far as Cocoa is concerned, it’s all about money chief. If I can develop a Windows or Linux application with Cocoa, I might actually have a chance to convince my boss to release a Mac version.
http://guide.apple.com/uscategories/education.lasso
Simply selecting one out of about thirty categories of educational software (Language Arts (English) K-12) yielded over 300 results, the first 20 or more titles being OS X software.
If you’re walking into BestBuy or CompUSA, expecting to find Mac software, you are someone who’s not looking.
If you’d like I’d assist you with finding Mac software resources. Start at guide.apple.com and versiontracker
“First, your definition of “midrange” is sorely lacking. Midrange means “middle of the range.” A midrange PC will always run for about $1500 because PCs tend to cost from $500-$2500 without a monitor. A $250 machine may be “fast enough” but then that was never considered an excuse for Macs being slow so it shouldn’t apply to PCs either. Face it, for $250 you’re not getting much from any manufacturer (home built really isn’t a consideration here).”
Firstly, I suggest you do a bit of price checking and see how little components actually cost. Secondly, a $1500 PC is not much faster than a $500 PC in most situations, probably 10-15%, unless you are a hardcore gamer. Thirdly, an XP 2400+ with 512 MB DDR333 RAM, an nForce3 board, and an nVidia FX5200 is midrange in the PC world but is as fast as or faster than a dual 1GHz Power Mac G4, and cheaper than a base-model eMac.
“Second, Apple is not attempting to compete with a $250 whitebox machine. Given recent rumors it appears that they are considering an entry into the $600 whitebox market but they will never go lower than that. The margins just aren’t there. Why mass produce and market a PC for which the margin is $10-20 per unit? That’s just foolish.”
“Third, price/performance comparisons are pretty difficult right now considering that we don’t know the price points for the new PowerMacs.”
Apple has never been competitive on price/performance because they have never had the sales volume to spread out R+D costs. Even if Apple sells 50,000 G5s in 2003, they will have spent hundreds of dollars per machine just in development costs.
If you look at the prices of Opterons, you will see they are currently about 10 times the price of similarly performing 32-bit AMD CPUs. Apple isn't going to be able to buy 1.6GHz PPC 970s for $25 apiece. The point is that the mooted PPC 970 processors are only about as fast as $100-150 XPxx00+ AMD CPUs. Intel and AMD will continue to drop CPU prices. Expect 800MHz-bus, 3GHz hyperthreaded P4s for <$200 in six months.
Judging from previous Apple efforts, they will introduce very fast, great-looking 64-bit machines that sell for 4x the price of a similarly performing PC. The 'wow' factor will initially be huge and will quickly lose momentum. In 6-12 months, expect the Apple machines to be seriously lagging in the performance stakes.
Nice guesses on performance (but totally wrong – do some research instead of making up facts next time).
Judging from previous Apple efforts they will introduce very fast, great looking 64 bit machines that sell for 4x the price of a similar performing PC.
And it’s statements like this one that remove any shred of credibility you could possibly have had. Goodbye
Looking in the "Kids Edutainment" section of the Apple Product Guide, only 9 of 180 titles are OSX-ready, and in the "Kids Toddlers and Early Learning" section, only 17 of 372 titles were OSX-ready. Roughly speaking, less than 5% of the titles are OSX-ready.
A more advanced search reveals that of 17346 titles in the APG, only 6045 are built for OSX. If I search only for commercial software, there are 11065 titles, 2479 of which are OSX ready. Looking at all software, less than 35% is OSX ready, less than 23% for commercial offerings.
I imagine that the APG tracks all software titles ever registered, not just titles that are actively supported. So, even though the total number of titles available for the Mac would go down, the percentages would most likely go up.
I don’t see how Cocoa is a “killer app.” First understand what Cocoa is, it’s just an API like Win32/MFC, Qt, etc. And Objective-C is just another language. We already have Qt, wxWindows, and possibly more that I haven’t heard of, for cross-platform API and windowing toolkit. Where’s the incentive to use Cocoa? It’s just duplicated work. Sure one can argue more variety is healthy, but I don’t see how Apple would benefit from it. It would be nice of them to give it to us as open source.
If you’re looking at it as a way to convince a project manager to allow you to make a MacOS port of your product, you’re going the wrong way. Don’t hold your breath for a cross plaform Cocoa. What you should be doing is designing your Windows/Linux apps to be easily ported to OSX, use wxWindows or Qt for example at least that way you have some sense of source compatibility.
Apple does have a growing number of applications and developers for OS X, but there are still big gaps. The vast majority of Mac users are still using OS 9, either full time or in Classic mode. So, the base must grow much larger. Perhaps the big switch-over will finally take place with Panther/PPC 970.
I, for one, hope that Apple manages to outdo the Wintel world well enough to eat into the market share of Microsh*t, and provide some room for better alternatives on the PC, such as yellowTAB's Zeta*. The world doesn't necessarily need to get rid of Microsoft, but it does need some very healthy competition in the desktop market.
I feel a nice, fast, clean, elegant, Apple computer could help bring about changes in the ways Microsoft thinks. Apple may never be able to grab more than 5% market share, but if Microsoft fears that they may be able to, they will have to act to please that market area (and that will prove difficult with current and even future Microsoft offerings).
–The loon
*In interest of full disclosure, I must say that I am a lead developer (meaning I have control of certain developmental aspects/teams) for yellowTAB on the Zeta OS, based on BeOS Exp/Dano 5.1xx.
I feel a nice, fast, clean, elegant, Apple computer could help bring about changes in the ways Microsoft thinks. Apple may never be able to grab more than 5% market share, but if Microsoft fears that they may be able to, they will have to act to please that market area (and that will prove difficult with current and even future Microsoft offerings).
Steve Ballmer said in his annual letter to employees that Linux is the biggest challenge Microsoft faces. But maybe the PPC 970 will change that.
Well, if this news story is true, what will probably happen is that these new Apple computers will feature the new Jaguar build, Smeagol, as previous versions of X won't be compatible with the new 64-bit 970.
Smeagol will run just the same as Jaguar. Then Panther will be released and will be able to run both on the 32-bit G3 and G4 systems and on the new 64-bit 970 systems. And Panther will be fully compatible with both 32- and 64-bit applications.