AMD executives reiterated that the company is focusing on the server version of the Opteron processor, but that the 64-bit Athlon 64 will still appear in 2003. “You saw our financials; I’m not going to lie to you. It doesn’t make a lot of sense to build a new processor for a niche market,” said John Crank, senior brand manager for desktop product marketing for AMD’s Computation Products Group. Although PCs certainly dominate the computing landscape, Crank and other AMD officials said they believed servers and their applications would be better positioned than PCs to take advantage of the Opteron’s capabilities at launch.
I don’t understand; I just need to recompile the apps to take advantage, no?
That, my fellow readers, is the beauty of open source.
anyway
So I guess he’s saying it will be a few cents cheaper than an Itanium at launch. Big surprise there.
I’m still waiting to get my 64-bit desktop.
Could be worse: you could have to buy a new version of Windows to take advantage of it. 64-bit Windows XP is probably not too cheap…
That, my fellow readers, is the beauty of open source.
With its fairly good backwards compatibility, and Intel having no immediate plans for a 64-bit desktop, the Athlon 64 could help AMD make huge inroads into the desktop PC market. After all, backwards compatibility is a huge reason why most people are using x86 and Windows. I am, however, a bit more skeptical of them being able to take on the server market.
Considering that a lot of the software on the market doesn’t really take advantage of 32-bit, 64-bit sounds like a waste for everyone except the number crunchers and those in research circles.
Too bad. I really wish AMD were doing better.
Kaneda Langley:
I don’t understand; I just need to recompile the apps to take advantage, no?
That, my fellow readers, is the beauty of open source.
And a closed-source company could always just recompile their apps. And since some companies don’t charge for every minor upgrade (some don’t charge for upgrades, period), I could very easily get upgrades for a number of my closed-source programs for free… So… Your point was what exactly?
You can run your existing 32-bit apps in compatibility mode, but taking full advantage of a 64-bit processor could involve some rewriting. Some apps may require a fair bit of work even to compile for a 64-bit target; it depends on the code.
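To give a made-up example of what that rewriting is about (hypothetical C, not from any real app): a straight recompile will happily build code like this, but the format string stops matching the type once long grows to 64 bits.

#include <stdio.h>

int main(void) {
    long nbytes = 1234;        /* long is 32 bits on x86 Linux, 64 bits under LP64 */
    printf("%d\n", nbytes);    /* "works" on a 32-bit build, wrong once long is 64-bit */
    printf("%ld\n", nbytes);   /* correct on both 32- and 64-bit targets */
    return 0;
}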
Deletomn:
And a closed-source company could always just recompile their apps. And since some companies don’t charge for every minor upgrade (some don’t charge for upgrades, period), I could very easily get upgrades for a number of my closed-source programs for free… So… Your point was what exactly?
Red Hat and SuSE both had full distros running on Opteron and Itanium when all MS could do was boot into Windows and shut back down. None of us know all the code in the secret sauce that MS uses, but they may be very dependent on many of the features of a 32-bit processor.
That, my fellow readers, IS the beauty of open source.
As long as I can keep buying cheap low-end to mainstream Athlons undercutting Intel’s prices, I’m happy. I’m in no rush for 64-bit buzzwords, just as I was in no rush for 32-bit. Gimme cheap ’n’ cheerful.
I’m personally looking forward to the new Athlons. First, backward compatibility will be absolutely perfect. Just like when Intel made the jump from 16 to 32 bits, AMD has not changed the base architecture from what it is now. The only way things have changed is upward. For instance, most of the speedup will probably come from the fact that you now have 16 full 64-bit GPRs as opposed to the current 8 32-bit ones, of which only about 4 are really usable. This will make the architecture more RISC-like if you ask me, but it will still support mixed memory/register instructions, so you almost get the best of both worlds.
Second, NO MORE SEGMENTATION!!! Has any OS properly used this yet? If you look at Linux/BSD code, they set the segments up for a flat addressing mode anyway. If you read the Intel developer manuals, it’s like they said, let’s take every feature EVER thought of and put them all into one processor. Maybe Windows was designed to use it, but no design I’ve been able to see the inside of has used it appropriately.
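Just to illustrate the “flat” setup (a minimal sketch, not lifted from any particular kernel): the descriptors boil down to base 0 and a 4 GB limit, so segmentation effectively gets out of the way.

#include <stdint.h>

/* Hypothetical flat GDT, the way Linux/BSD-style kernels set things up:
 * base 0, limit 4 GB, 4 KB granularity, so segment translation is a no-op. */
static const uint64_t flat_gdt[] = {
    0x0000000000000000ULL,  /* mandatory null descriptor */
    0x00CF9A000000FFFFULL,  /* ring-0 code segment: base 0, limit 0xFFFFF pages */
    0x00CF92000000FFFFULL,  /* ring-0 data segment: base 0, limit 0xFFFFF pages */
};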
Personally, from an OS standpoint, I think the new processor will provide a much better environment than current x86 processors, and most of the infrastructure is already in place with the OSS OSs. I still don’t like x86 because it’s such a hodge-podge, features-piled-on-top mess, but it looks like AMD is trying to clean things up. Linux, glibc, and FreeBSD are working on it, and they should be AMD’s poster children for performance.
That’s my two cents.
This is how you have to read the story, backwards:
“Barton processor, officially due in February, will initially ship using a 3000+ model number, with a 3200+ follow-on due to ship in the middle of 2003. Reports have pegged the Barton’s launch for Feb. 10.
“Everything is on track,” Crank said, backing up reports that have said AMD will have adequate supplies of the Barton when it launches next month.”
Nothing new, really: Opterons for servers (Q2)
and the AXP64 (256 KB of L2 cache) for consumers in Q3-4.
And you can already download the Linux kernel for the Hammers,
and the Nvidia drivers too. Compare this to Microsoft…
Will I be able to do the usual PC update by buying a new ASUS board, a new 64-bit processor, and possibly some new RAM modules over at some discounter for a reasonable price?
If not, I doubt it will replace my existing 32-bit systems…
Regards,
Marc
While in theory you can just recompile your apps and they’ll work on a 64-bit processor, you’d be surprised how many apps this doesn’t work for. This is mainly due to the programmer not thinking ahead and taking for granted that such things as pointers are always 4 bytes.
I used a 64-bit box (DEC Alpha, damn I miss it) for over a year, and I got to see firsthand how many packages did not run properly on it.
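The classic assumption looks something like this (an illustrative snippet, not from any real package):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    unsigned int addr = (unsigned int)(uintptr_t)&x;  /* "pointers are 4 bytes": truncated under LP64 */
    uintptr_t ok = (uintptr_t)&x;                     /* an integer type actually wide enough for a pointer */
    printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));  /* 4 on ILP32, 8 on LP64 */
    (void)addr; (void)ok;
    return 0;
}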
Jonathan Belson: You can run your existing 32-bit apps in compatibility mode, but taking full advantage of a 64-bit processor could involve some rewriting. Some apps may require a fair bit of work even to compile for a 64-bit target; it depends on the code.
Yes, I know. But this applies to both open source and closed source, does it not?
SeanParson: Red Hat and SuSE both had full distros running on Opteron and Itanium when all MS could do was boot into Windows and shut back down. None of us know all the code in the secret sauce that MS uses, but they may be very dependent on many of the features of a 32-bit processor.
That, my fellow readers, IS the beauty of open source.
And I suppose all closed source vs. open source contests are like this?
BTW… I’m not saying Open Source is bad. I just fail to see the supposed advantage here. If the source code (regardless of whether it is open source or closed source) requires some work, then it requires some work either way. If it’s just a simple recompile, then it should be a simple recompile either way. The compiler isn’t going to say… “Oh… You’re an open source program… I’ll let you be 64-bit. Oh… You’re closed source… (bad word here) you… I’m not going to let you compile.”
And if some code requires work, closed-source programmers aren’t necessarily going to say, “Oh… I’m going to go on vacation for the next 10 years…”
>Yes, I know. But this applies to both open source and closed source, does it not?
Of course. They are both “code”.
If there is one thing that I do not like about the x86-64 design, it is the removal of segments in long mode. I cannot see why AMD decided to do this, as the hardware support must still exist for 16/32-bit compatibility, and the segment overrides are still defined in long mode (as no-ops, so the opcode space is not reused).
Segments make it possible to design a fast and lean IPC system that is still secure. The L4 microkernel uses them, the Go! microkernel uses them, and I used them in my previous kernel designs…
I personally prefer OSS, but I am not going to tell everyone to switch to GNU/Linux and FreeBSD if you are either happy with or dependent on proprietary operating systems (e.g., MS, Apple, Sun, etc.). The reason that OSS will have a smoother transition is that the code has usually been compiled for multiple platforms already. The potential compiling problems have mostly been addressed. In this respect, OSS has a distinct advantage.
That, my fellow readers, is the beauty of open source.
SeanParsons:
I personally prefer OSS, but I am not going to tell everyone to switch to GNU/Linux and FreeBSD if you are either happy with or dependent on proprietary operating systems (e.g., MS, Apple, Sun, etc.). The reason that OSS will have a smoother transition is that the code has usually been compiled for multiple platforms already. The potential compiling problems have mostly been addressed. In this respect, OSS has a distinct advantage.
That, my fellow readers, is the beauty of open source.
Closed-source programs can be developed for multiple platforms as well. For those programs, their potential compiling problems have also mostly been addressed. Is that the beauty of closed source, then? No…
Once again…
As Eugenia said…
“They are both ‘code’.”
And if code has been written with different platforms in mind, then it should have an easy transition to 64-bits regardless of whether it is closed source or open source.
The only way things have changed is upward. For instance, most of the speedup will probably come from the fact that you now have 16 full 64-bit GPRs as opposed to the current 8 32-bit ones, of which only about 4 are really usable.
This is the #1 complaint proponents of proprietary RISC platforms have against x86: lack of registers. This taxes cache and front-side bus bandwidth, as values must be loaded in and out of cache and main RAM much more frequently due to the lack of registers. This is especially problematic on the P4, which is further starved for bus bandwidth in that a mispredicted branch can result in the trace cache being filled with the wrong micro-ops, so the original code must be loaded from main memory (oftentimes repeatedly, though this problem was mitigated by improved branch prediction and increased front-side bus bandwidth).
An increase in registers will help to bring x86 closer to clock-for-clock performance levels with (pure) RISC chips. Since x86 chips have already decimated every RISC platform in terms of raw clock speed, they can only emerge from this a clear winner.
Segments make it possible to design a fast and lean IPC system that is still secure. The L4 microkernel uses them, the Go! microkernel uses them, and I used them in my previous kernel designs…
Perhaps it’s because AMD and the rest of the world don’t give a shit about microkernel architectures. The only case that comes to mind where IPC is really necessary is a GUI. Windows solved this by making Win32 drawing calls system calls. OS X solved it by having applications draw to shared memory, while XFree86 provides a similar shared-memory solution for passing pixmaps to the X server. While Microsoft’s approach places a lot of code in kernel space, I believe it has proven the most effective solution in terms of GUI performance/responsiveness and in reducing the number of context switches that occur in a GUI environment.
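For the curious, that shared-memory style of passing pixmaps boils down to something like this rough POSIX sketch (the object name is made up and error handling is trimmed):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 1024 * 768 * 4;   /* room for a 1024x768 RGBA buffer */
    int fd = shm_open("/pixmap_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    ftruncate(fd, (off_t)len);
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) return 1;
    /* the client draws into buf; the display server maps the same object by name */
    munmap(buf, len);
    close(fd);
    shm_unlink("/pixmap_demo");
    return 0;
}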
The performance of Mach in OS X is virtually meaningless in terms of raw GUI performance as it’s only used for event notification and synchronization, both of which require relatively low throughput.
Ultimately, there really isn’t a need for throughput between processes, and people care more about filesystem and network throughput, two areas in which a “pure” microkernel environment tends to lose when compared to a monolithic kernel.
It’s time to face facts: everyone on x86 uses a monolithic kernel, and therefore has no need for segmentation.
The problem with Open Source is that YOU have to be the one to do the compiling and the extra work, or HOPE that someone else does it for you. How many apps on SourceForge are going to get that effort applied to them??? What is the guarantee??? When will it be done???
With closed source, the market provides a mechanism (the upgrade fee; companies are always on the lookout for ways to collect it) to get apps upgraded. Unlike open source, where YOU might have to pay someone (or do it yourself) to upgrade the code, in the closed-source world the cost of upgrading is __split out__ over the hundreds or thousands of users who pay the nominal upgrade fee.
I don’t think the closed source world will be wanting for 64-bit apps since it represents a money stream. In the open source world, the popular apps will get the upgrade and anything else will be a crap shoot.
Outside of the popular areas of software, the open-source world sucks: there are tens of thousands of niche players that produce god-awful boring software that gets the job done in unusual markets (trucking, logistics, farming, chemical plants, etc.). These are pure business plays, and the ability to use the market to split the costs of development and upgrades is essential. The market also gives you the ability to pressure firms to get things done. With open source it is always a hit-or-miss thing.
If the world is to get the most out of its people, then division of labor is essential; otherwise we all waste our time doing things that others should be doing. I don’t want to force a brain surgeon to spend countless hours trying to figure out how to upgrade his/her software!! Allow a software company to serve that market niche and everyone can then concentrate on their specialty.
Money as the medium of exchange works…and as an incentive it works great…especially when there is competition!
I suppose if you are a student or unemployed or always have nothing to do, then free software is fine, since it appears that it is designed for people whose time is considered FREE.
new toy.
always fun to play with.
😉
The beauty of open source is that you don’t have to wait for anyone to release their 64-bit applications. You can compile your own. Big companies are usually very slow to adopt new technology (guess how many of your programs use MMX for block memory reads and writes?). Even if they want to embrace the technology, they may have many reasons why they won’t or can’t. They might not want to piss off Intel, they might want to save it as a marketing point for the next version, which won’t be available for quite some time, they might lack resources at the time… In short, they are out of your control.
If your kernel and compiler run, well-written code should run too, whatever the changes in the hardware. If a particular piece of code doesn’t compile or run in 64-bit mode and you just can’t live without it, you can always fix it. We know for a fact that our kernel and compiler run, and run well, on x86-64; we also know most programs we use are compiled on a variety of platforms, so they must be well written, at least as far as portability goes. (I’ll return to this soon.)
Windows is 64-bit clean too, but do you have any guarantee at all that Windows software is too? In fact at least some of it, the drivers, is definitely NOT 64-bit clean. My printer doesn’t have a Windows XP driver even though it is a mid-range laser printer from Panasonic. I wouldn’t count on those people for a quick release of 64-bit drivers. The Visual C++ suite isn’t known for its portability (0 ports), and neither is MS Office (just 1 port). Companies working on Windows-only software had NO incentive at all to code their applications in a way that ensures operation on 64-bit platforms; it would be surprising if they had bothered.
Most OSS programmers couldn’t care less about portability either, but the process is almost automatic. ‘A’ writes ‘P’ for x86 Linux, making several assumptions in the process; ‘B’ likes ‘P’ and ports it to PPC Linux; ‘C’, being the FreeBSD guy, wants everything on Linux to work natively on FreeBSD too, so he ports it; ‘D’ wants to try it on Cygwin… Voila! Many assumptions about byte order, word size, existing libraries, etc. are now removed from the original ‘P’. If you want to port it somewhere else, it will be very easy, and your porting will facilitate future portability too. The critical requirements are that P is open source (or else B can’t port it) and P is a nice program (otherwise B doesn’t even bother).
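Two of those assumptions in one toy C program (purely illustrative):

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char hdr[4] = {0x01, 0x02, 0x03, 0x04};
    unsigned int raw, portable;

    memcpy(&raw, hdr, sizeof raw);   /* byte-order assumption: different value on PPC vs x86 */
    portable = ((unsigned int)hdr[0] << 24) | ((unsigned int)hdr[1] << 16)
             | ((unsigned int)hdr[2] << 8)  |  (unsigned int)hdr[3];   /* explicit decode */

    printf("raw=0x%08x portable=0x%08x\n", raw, portable);
    printf("sizeof(long)=%u\n", (unsigned)sizeof(long));  /* word-size assumption: 4 on x86, 8 on Alpha/x86-64 */
    return 0;
}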
You can prove me wrong, but:
if you recompile a 32-bit app, all you get is a 32-bit app running in a 64-bit environment, not a real 64-bit app.
imho
You will have all the extra registers, SSE2, and 64-bit pointers to manipulate a gazillion bytes of memory. What else were you expecting from a “64-bit app”?
Everything else being equal, and using less than 4 GB of memory, 64-bit programs should run slower than 32-bit programs on average. There may be some applications that especially like 64 bits (the ones naturally dealing with quantities representable in more than 32 but fewer than 64 bits) that would run *slightly* faster. Those (and only those) programs need a rewrite to take advantage of 64-bit registers. OTOH, there may be others that really hate 64 bits because suddenly they don’t fit in the CPU cache, which would make them run a lot slower. Your average program wouldn’t be affected much, but will be slightly slower due to wasted memory bandwidth and slower arithmetic operations.
Fortunately, x86-64 isn’t just 64-bit IA-32, so “everything else being equal…” doesn’t hold. All the additional advantages are available with a single recompile, though.
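A quick C illustration of the cache point (the struct is hypothetical):

#include <stdio.h>

struct node {
    struct node *next;   /* 4 bytes on a 32-bit build, 8 bytes under LP64 */
    int value;           /* 4 bytes either way */
};

int main(void) {
    /* Typically prints 8 on a 32-bit build and 16 on a 64-bit one (the pointer
     * doubles and padding rounds the struct up), so the same list holds half
     * as many nodes per cache line. */
    printf("sizeof(struct node) = %u\n", (unsigned)sizeof(struct node));
    return 0;
}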
I’m not surprised that AMD chose to push the Opteron in the server space first. They have limited fab capacity capable of Athlon 64/Opteron production (SOI). Opterons will have much larger margins than Athlon 64s. Also, most of what I’ve read indicates that the Opteron will have a tough time competing with the P4 in normal desktop apps. However, in server and workstation apps, the Opteron will be extremely competitive. It is there that the 64-bitness will matter, and there that the ease of building multiprocessor systems will matter. In single-processor systems on a 32-bit OS, the only real advantage the Athlon 64 provides over Barton is its lower memory latency. Barton will do OK in the low-to-mid desktop, but the P4 will control the high-end desktop. I expect the Opteron to be price-competitive with the P4 Xeon (a much lower price than Itanium). 64-bit Linux and Windows availability (server/workstation OSes, not desktop) will be very important to the Opteron’s initial success.
Are people still writing code that is not portable from 32 to 64 bit?
Deletomn – having the source just gives the user and the developer the advantage of trying to recompile an application before a vendor decides that a port is necessary. Plus, pretty much as soon as a new architecture is released and the Linux guys get their hands on it, you know that they are going to try to bring up their kernel.
As far as 64-bit goes, the benefits are wasted on the average user right now. Google it if you don’t believe me.
A few years ago, I read:
Why would we want a graphics accelerator? 3D looks shitty, and most games are 2D anyway.
Why would someone need more than 1 MB of RAM? If you can’t make your program work in 1 MB, it just means you are a bad or lazy coder.
Why would we need more than 1 GHz… I hope you get the point.
If you’re right, why do all high-end workstations, game consoles, DSPs, GPUs, VPUs, etc. use more than 32 bits???
“I’m not going to lie to you. It doesn’t make a lot of sense to build a new processor for a niche market,”
How again is that vague?
Except for bragging rights, I don’t see any reason for 64-bit in the desktop market. One thing is for sure: it wouldn’t be cheap. And 64-bit wouldn’t make apps run 2x faster, contrary to popular belief. However, they probably have a better chance in the server market, if they target the low end and midrange.
As for the desktop, all I want is Barton. Release that, I’m happy.
1) Because 2 GB of PC2700 (pretty much the max memory config for 32-bit processors) costs $300.
2) To enable cool single-address-space or persistent-object-store operating system designs.
3) To make mmap() useful again for 2 GB+ files.
4) Because most everything else in the proc is already at least 64 bits (64/128-bit load/store, 128-bit SIMD, 80-bit FPU), so expanding the addressing and integer ALUs to 64 bits costs very little. Even my PocketPC processor (MIPS VR4112) is 64-bit. It’s really not an inherently expensive feature.
PS> On the point of recompiling closed-source software: don’t you remember how long the 16 -> 32 bit transition took? Do you really believe that developers have gotten that much smarter since then? In comparison, OSS developers are already used to writing portable code, because the UNIX OSes they target run on many architectures. Do you know how much work it took to port Gentoo (base system, X, GNOME 2) to the Alpha CPU? One developer, 2 days. That *is* the power of Open Source!
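Point 3, sketched in C (“huge.dat” is a made-up file name and error handling is minimal): with 64-bit pointers the whole file maps in one call, where a 32-bit process runs out of address space around 2-3 GB.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("huge.dat", O_RDONLY);
    if (fd < 0) return 1;
    struct stat st;
    if (fstat(fd, &st) < 0) return 1;
    char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) return 1;
    printf("first byte: %d\n", data[0]);   /* index the file like an ordinary array */
    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}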
Actually, Bascule, most x86 CPUs already have more registers than most RISC chips. The Pentium 4 has 128 internal registers, compared to the G4’s 48 internal registers. The CPU automatically maps the visible registers to the internal ones to keep the impact on main memory minimal. The main issue is that it’s harder for the compiler to optimize its code with only 8 visible registers. With 16 visible registers, the optimizer in the compiler can convey more information to the optimizer in the CPU. This is probably why there are only 8 more visible registers, because 16 seems to be enough to reach fairly optimal code generation.
…has been available since 1993-ish. NT was a multi-architecture OS, and Alpha was probably its second-most popular platform. According to hearsay, M$ continued in-house Alpha development after NT 4 as well, just to make sure it remained 64-bit compatible.
Of course, combining this with x86 baggage is another thing. But I really think that this whole idea stinks. We should bury the Intel x86 platform once and for all. No one liked it anyway. IBM and NEC were the only ones silly enough to use it.
Actually, Bascule, most x86 CPUs already have more registers than most RISC chips.
And actually, Rayiner Hashem, I am aware of this, and of the problems associated with it, such as the partial register stalls on the PIII, which required the entire pipeline to be flushed and all instructions in it re-executed, and which also resulted in all PIII-optimized code being compiled with a workaround for this problem, further bloating x86 code.
This does not fix the problems I discussed; it only allows for parallel instruction execution.
The main issue is that it’s harder for the compiler to optimize its code with only 8 visible registers. With 16 visible registers, the optimizer in the compiler can convey more information to the optimizer in the CPU.
The problems are exactly what I described: over-reliance on cache and main memory, which oversaturates the front-side bus.
This is probably why there are only 8 more visible registers,
No, that would be for legacy compatibility.
because 16 seems to be enough to reach fairly optimal code generation.
Which is why AMD is moving to this in x86-64…
“And if code has been written with different platforms in mind, then it should have an easy transition to 64-bits regardless of whether it is closed source or open source.”
Assuming the developer of the closed-source application takes the time to port or recompile, you are correct. However, perhaps they went out of business, feel there is no demand, moved to a different architecture, etc. Prime example: BeOS.
With open source, if nobody else ports or compiles your favorite app to your favorite architecture, YOU CAN, because YOU have access to the code. THIS is the power of open source; the application (or OS, or whatever) will not die as long as someone wants to continue developing it.
And of course the weakness of some “free as in beer” (not free as in “freedom”) open-source projects is that people tend not to pay for something they can get for free. Prime example: Mandrake Linux.
Competition for the Pentium 4 is a good thing. Not sure how these new chips will compare against 3+ GHz Hyper-Threading. At 0.13 micron they should run cooler and faster.
See http://www.emulators.com/pentium4.htm for some interesting reading on the Intel/AMD duel (with benchmarks).
Here’s a fairly good article on the Athlon 64:
http://www.icrontic.com/?action=article&id=381&page=1
The author does seem prone to slip back into 64-bit hype mode every now and then, but otherwise it is a good article, especially if you are not familiar with the intricacies of processors and AMD’s x86-64 chips.
<< Of course, combining this with x86 baggage is another thing. But I really think that this whole idea stinks. We should bury the Intel x86 platform once and for all. No one liked it anyway. IBM and NEC were the only ones silly enough to use it. >>
OK, but what other cheap platforms are in common use without similar drawbacks?
As someone else pointed out (but it’s a point I want to emphasize): the real advantage of open source is that you’re not at the mercy of one specific provider’s agenda.
Computers can be used and configured in a lot of ways that a software provider didn’t think of, or didn’t want to or can’t invest time in. I can think of many examples:
– using Linux on Macs and not having Nvidia PPC drivers (because of little demand or political motives)
– Macromedia’s Flash plugin on different architectures or platforms (again, is there a LinuxPPC version?)
– Closed-source software on Linux with glibc breakage…
etc. etc.
Fact is, just because some nice new processor comes out does NOT mean the app vendors will bother to support it, or even to create an “unsupported binary” – even if all that’s needed is a simple recompile.
And over the years I’ve been bitten several times by these issues – mostly with driver support, but occasionally with other things.
I understand economic realities, and see why for instance Nvidia’s driver (which is very performant, and very compliant) stays closed – considering all the manpower that went into that thing. But why should that make anyone who’s still “unsupported” feel better?
Isn’t the deal behind the Athlon 64 its 32-bit backwards compatibility? I think AMD said that the Athlon 64 would run 32-bit applications 15% faster than a similarly clocked 32-bit Athlon. For Windows users like myself, the change from a regular Athlon CPU to an Athlon 64 shouldn’t be noticeable (other than the knowledge that the CPU has a lower clock speed, but a higher PR rating, than a similarly clocked 32-bit Athlon). I may be wrong, but until a 64-bit version of Windows is released, Windows users with Athlon 64s won’t even be able to run 64-bit applications.
There’s going to be one slight problem with any 64-bit Windows, and that’s what to do about DOS emulation.
Even though it offers greater compatibility with IA-32 in long mode than any other processor (even IA-64), to use v86 mode you have to switch back to legacy mode. This is a difficult constraint on the OS. You would have to make sure that the page tables for a v86 process reside below 4 GB so that you can switch back to legacy mode, and you’d have to tweak the scheduler to account for those processes and warp the OS into that mode every time you needed it. I think it can be done, but the AMD folks have hinted that switching between long and legacy modes is quite a performance hit.
And before you say “who cares”, consider that some OSes may need to access BIOS or VBE functions that typically only run in v86/real mode. I suspect that Windows might still need to support such a mode if they don’t want to invoke the wrath of customers.
What do others think? Should a 64-bit Windows drop v86 mode and DOS/DPMI emulation completely, or support it through either Bochs-style emulation or warping the CPU into legacy mode?
P
@ Chris:
> Are people still writing code that is not portable from
> 32 to 64 bit?
People are still writing code that is not portable from one 32-bit architecture to another. People are still writing code that is not even portable from one *compiler* to another. Welcome to the real world, where commenting source code is frowned upon and C++ as a language is dismissed because of “bad performance”…
😀
Erm, guys? Isn’t this the thread on a *hardware* related issue? Why is everybody jumping at any opportunity to start *yet* another Linux/OSS vs. Windows/CSS flamewar?
This is tiring.
If 16 is better than 8, why didn’t AMD offer 32 new GPRs rather than just 16?
I’m surprised that PowerPC did not deliver the promised 2x price/performance advantage over CISC that was hyped two years ago.
********
This is the #1 complaint proponents of proprietary RISC platforms have against x86: lack of registers. This taxes cache and front-side bus bandwidth, as values must be loaded in and out of cache and main RAM much more frequently due to the lack of registers. This is especially problematic on the P4, which is further starved for bus bandwidth in that a mispredicted branch can result in the trace cache being filled with the wrong micro-ops, so the original code must be loaded from main memory (oftentimes repeatedly, though this problem was mitigated by improved branch prediction and increased front-side bus bandwidth).
An increase in registers will help to bring x86 closer to clock-for-clock performance levels with (pure) RISC chips. Since x86 chips have already decimated every RISC platform in terms of raw clock speed, they can only emerge from this a clear winner.