We’ve been able to leave the world of 32bit behind for a while now, with 64bit processors and support for them prevalent in all popular, modern operating systems. However, where Mac OS X and Linux seem to make the move to 64bit rather effortlessly, Windows has more problems. Even though 32bit applications should run fine on 64bit Windows, some don’t; and to make matters worse, drivers need to be 64bit, as there’s no support for 32bit drivers in 64bit versions of Windows. Still, Gizmodo claims that with Windows 7, the time is right to take the plunge. But is it really? And why do Linux and Mac OS X seem to handle the transition so much more easily?
The biggest problem with 64bit Windows is that kernel-mode drivers need to be 64bit as well. While newer hardware usually has 64bit drivers, hardware that is slightly older usually does not, and this is where the problems start: your favourite piece of hardware simply won’t work. User-mode drivers can be 32bit, by the way.
Mac OS X circumvents this issue by running the kernel in 32bit, allowing 32bit drivers to run without any problems (the kernel in Snow Leopard is supposed to be 64bit). The userland applications run in 64bit, however, so users still get most of the benefits. The Linux situation is different; here, the advantages of portability and open source come into play: drivers are simply recompiled to support 64bit.
The second problem is 32bit applications. While 64bit Windows is perfectly capable of running 32bit applications, you might still encounter problems, which can be quite annoying. Especially if you rely on certain applications, their 64bit support is something you should take into account.
Mac OS X has all of its important frameworks in 64bit (Cocoa, Quartz, OpenGL, X11), and thanks to the concept of fat binaries, you really needn’t worry about what version you download. In the Linux world the transition to 64bit once again benefits from the open nature of the operating system and its tools. While open source may have its downsides, there is no denying that in this case, its strength is pretty obvious.
According to Gizmodo, the time is right to move to 64bit with Windows 7. I personally faced this choice with my new computer, but I decided to stick to 32bit for now when it comes to Windows because I’m not that much of a performance junkie (I don’t think Miranda really benefits from 64bit), and I didn’t want to be bothered with its potential problems.
So my question to you is: have you ever faced this choice? What were your arguments to go one way or the other? What are your experiences running 64bit Windows or any other 64bit operating system? What major applications or hardware parts failed for you?
I once tried 64-bit Ubuntu (for its “user friendliness”) but hated it because it was very unstable with KDE4. Then I moved to my favourite distro, Arch, but it had too few packages for x86_64. Now I’m back on 32-bit Arch, waiting for another time.
64-bit Ubuntu is stable now. I’m running it as I type this.
There is a beta version of the Adobe Flash plugin for 64-bit that really helps here … its stability is great.
As the article mentioned … 64-bit drivers aren’t a problem in Linux, because of the access to source code.
I don’t think that the instability was caused by 64 bit packages.
Ubuntu ships broken KDE packages. That’s more likely the reason.
More like KDE4 was broken to begin with.
I am using 64 bit Fedora 9 with KDE 4.1.3 on a machine with 16 GB RAM.
Works well, that’s all I can say about it. I already made use of the memory when I ran an FE analysis with CODE-ASTER.
However, I would go with 64 bit only if the machine has 4 GB of RAM or more.
I’ve been running Ubuntu 64-bit for a while now, and found it to be entirely stable. No crashes at all on 5 physical computers and many more VMs. In fact, that’s more stable than any other operating system I’ve used, Linux or otherwise. Though, it’s probably because of the AMD/ATI drivers being far more stable than before.
Also, I’ve not encountered any issues related to the 64-bitness of the OS, unlike with Windows Vista 64, where I’ve encountered several even though it’s probably only got a cumulative uptime of about 15 days.
The x86_64 versions of Ubuntu 8.04 and 8.10 are stable here (including Nvidia 3D drivers).
If you had many issues with the 6.x versions of Ubuntu it might be worth giving it another chance as it has improved quite a bit. YMMV*
*I hope that makes you happy anecdotal police.
Ubuntu, Ubuntu, again and again…
The first x86_64 distribution was Red Hat Linux 9 (a technical preview). Next was FC1. Just after was RHEL 3 (full support, not a technical preview). SuSE also did a good job. Long long before Ubuntu.
Smolt (mostly Fedora right now) :
http://smolts.org/static/stats/stats.html
x86 : 73.8 %
x86_64 : 25.7 %
Edit: from the beginning of x86_64, rpm has supported mixing 32-bit and 64-bit applications.
If you tell us about your experience with fedora x64 you will be on topic, like the people telling us about their ubuntu x64 experiences are.
Been running ubuntu x64 for a while (using 8.04 atm, might have started with an earlier one) with no trouble except npviewer.bin still sucks (which is itself irrelevant, due to the x64 flash beta), not that I use flash much.
I switched because of a measurable performance increase in video decoding – about 25% as I recall.
You could run 64 bit Linux and have a 32-bit distribution in a root jail. I’m using this setup in 64-bit Kubuntu 8.10, with schroot for entering the jail. Works like a charm. I have 32-bit Skype which can use the camera handled by a 64-bit driver! I find it amazing. Kudos to all the developers that make this possible.
The only other 32-bit application that I use is Adobe Reader. Unfortunately, it is still the best PDF reader available. But it’s not a problem with schroot. Google for it, install 64-bit Linux with a 32-bit distribution in a root jail and enter a world of bliss.
Caveat: you might need to fiddle a bit with schroot’s configuration files for best results (like copying your sudoers file to the root jail’s /etc upon entering the jail; IIRC I also had to add /dev to the automatic mounts) but hey, that’s part of the fun!
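For anyone who wants to try this, a minimal schroot.conf entry for such a jail might look roughly like the sketch below. The section name, chroot directory and user name are placeholders; check the schroot man pages for the exact keys your version supports:

[hardy32]
description=32-bit jail for Skype, Adobe Reader and friends
type=directory
directory=/srv/chroot/hardy32
personality=linux32
users=yourusername
root-users=yourusername

The personality=linux32 line makes programs inside the jail see a 32-bit machine, which keeps installers and uname-based checks happy.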
Weird that they don’t mention Solaris. It will run a 64-bit kernel if your hardware is 64-bit capable, and run 32 bit otherwise. And also mixing of 32 and 64 bit apps is no problem there.
The reason they don’t talk about Solaris (which went 64-bit many years back) is maybe because they mostly talk about Windows and other non-enterprise-server OSes?
That is, of course, if you don’t bring OpenSolaris into the mix…
OpenSolaris is also a desktop-oriented OS and should be considered, given that it has many advantages when compared with the rest of the OSes.
Solaris is a perfect example of how stable a 64 bit OS can be.
Uh? Bitness and stability are orthogonal, so I don’t see why you’re linking the two…
Of course 64bit Solaris is stable: it has been 64bit for a long time (since before AMD64 even existed, on SPARC), so it’s a mature feature; plus, as Solaris is a server OS, its developers have always put a big emphasis on stability.
Solaris works fine as a desktop. I’m not the only one that uses a Solaris workstation daily and who’s perfectly content with it. I don’t see why it shouldn’t be considered. The design of the OS and how it deals with 64-bitness is perfectly valid.
We have used XP 64 over one year, and with remarkable little problems. Att home I have used xp 64 for more than two years, and I also use Debian linux(sid) 64 bit version
“We have used XP 64 over one year, and with remarkable little problems. Att home I have used xp 64 for more than two years, and I also use Debian linux(sid) 64 bit version”
I think you meant “remarkablY” here, not “remarkable” It must be the funniest mistake I saw this year!
“I have seen” is better English.
Normally I wouldn’t correct peoples grammar, however I hate hypocrisy.
If you’re going to be pedantic about other people’s spelling and grammar, then please ensure your own posts are spotless.
Well, maybe you should take your own advice. Note the two different spellings/uses of “people’s” in your post.
You’re exaggerating somewhat. The spelling and usage are the same however I just hadn’t included an apostrophe in the first instance.
However this is irrelevant as you’ve somewhat missed my point. I wasn’t out to correct the previous post but rather make a point (by example) that nobody appreciates a language pedant – particularly when the error is such a minor one.
Maybe I should have been more direct in my original post but I thought what I did was politer than simply mod’ing down
politer ???
This one will run and run !
I doubt that:
http://en.wiktionary.org/wiki/politer
http://dictionary.reference.com/browse/politer
Being dyslexic – I made sure I spell-checked before posting
Moving into the “more and more off-topic” zone, but I can’t stand the American English dictionaries … it’s like they’ve dumbed down the language to the point of almost being German in their spellings of too many words. Similar to that ancient Internet joke about standardising (yes, using a “z” in there is also wrong) the English language over the course of 7 years, where the final result is essentially “phonetic German”.
“Politer” is just wrong. It’s like saying “wronger”, or “funner” or “gooder”. The correct usage is “more polite”. Similarly, “more wrong” or “worse”, “more fun” and “better”.
But, that’s a debate for another day.
And I think you both fail to realize that the original correction made was for the sake of pointing out something funny – that “remarkable little problems” is far, far different than “remarkably little problems”. malfarot was not necessarily pointing out the mistake for the sake of pointing out the mistake, but rather for the sake of pointing out something humorous.
To summarize: *whoosh*
I got his ‘joke’ – I found it pedantic and not funny.
Anyway, I think this tangent has run on long enough now.
Not that this is at all important, but I couldn’t help but notice that “peoples” needed an apostrophe. Normally I wouldn’t correct you seeing as how your punctuative and grammar skills are stellar when compared to others added to the fact that I never correct anyone, but the post was a little ironic if you see my point.
Edit: Whoops. Looks like several of you beat me to it a long time ago. PS– I haven’t heard “politer” used much, but it’s an actual word, and if it wasn’t one, I’m personally all for making up new words (as long as they have a rhetorical effect, that is).
Several of us try to do our best when writing something in your language, so, be tolerant with us because we are in foreign arena…
Either you are not telling the entire story or you do virtually nothing on your Windows 64 machines. The 64 bit versions went through a period when they were unstable, unreliable, or hardware was just not supported; in all three instances it was largely because of drivers. Don’t get me wrong . . . I make six figures because of Microsoft so I keep my complaining to a minimum. But when people say they went to MS 64bit and had no problems, there is a reason, as stated at the beginning of my post.
XP x64 gets confused by DVD-RAM optical drives… especially the newer ones. Also, there is currently no painless way of getting VPN (Cisco) to work on that OS. Other than that, and the fact that a lot of electronics manufacturers do not make drivers for that OS (Canon, et al), I think it’s a damn fine OS. There are alternatives to all those problems I mentioned, but it’s still a solid OS. The situation has gotten a lot better than it was 2 years ago for that OS, that is for sure.
You can’t access a full 4GB of ram in a 32-bit kernel because of memory mapped peripherals (this is becoming a bigger and bigger problem as time goes on, modern video cards can easily use 512+ MB of address space). So, using 4GB of ram fully is a pretty good reason to run a 64bit OS.
I’ve personally been running 64bit debian for about a year now. I used to run a 32 bit browser in a chroot, but since we have 64 bit flash and 64 bit java plugin (latest beta), I don’t need to anymore.
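For the curious, on Linux you can get a rough idea of what is eating address space below 4GB by looking at the kernel’s physical memory map (output will obviously vary per machine):

cat /proc/iomem | grep -i 'pci\|reserved'

The ranges claimed there by PCI devices and firmware are exactly the addresses a 32-bit, non-PAE kernel cannot hand back to you as usable RAM.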
Unless your BIOS allows full access to that 4GB, it doesn’t matter whether you run 64bit Debian or not.
I’m running 64bit Debian and I’m stuck at 3072MB of RAM because the BIOS reserves the rest.
Since ASUS won’t offer an update to flash the BIOS, I’m stuck with it until I upgrade to hardware whose BIOS doesn’t have that obstacle out of the box.
Er, no. PAE has allowed access to >4GB in 32-bit processors for ages now. Windows is the only OS that never jumped on that bus, reserving that compatibility only for the top $$$ server products, so everybody assumes it’s a hard limit. Linux, OSX and BSD know better…
Linux and most of the BSDs support PAE as well, you know… You just have to turn it on.
I’ve been waiting to move my main Windows system over to 64bit for ages now, but I rely on Check Point NGX, which only provides a 32bit install (it includes a driver).
I therefore have to do all my 64bit development and testing in a virtual machine, which isn’t always ideal.
This is a vendor problem, not a Microsoft problem. If vendors would get their act together, it wouldn’t be so difficult and we could have all moved a long time ago.
Hmmmm. Let’s rephrase that a bit:
This is a vendor problem, not a Linux problem. If vendors would get their act together, it wouldn’t be so difficult and we could have all moved a long time ago.
Think of the horizon on that comment.
Why? His problem has to do with a Windows driver, not a Linux driver. The problems with memory-mapped drivers and Windows are caused by hardware manufacturers (yeah Nvidia, I’m talking about you), not Windows. Therefore the horizon on your comment is narrow and pretty undefined.
I was extrapolating… I guess a paradigm shift without a clutch is difficult for some people.
I was trying to be glib. I guess I should *REALLY* have been blunt and very straight forward. Explaining a joke and/or ideal just ruins it, in my opinion. Come on stop thinking this way.
Here it is the way I guess I should have typed it:
This is a HARDWARE vendor problem, not a Linux problem. If vendors would get their act together, it wouldn’t be so difficult and we could have all moved ^to Linux^ a long time ago.
Okay NOW… think of the horizon on that comment.
Drivers in general are not an issue, but access to the hardware specs or source code (and the “binary blobs decompiled”) typically *IS* when it comes to Linux and *BSDs.
I guess you have to be very short sighted to not see where I was going.
No, because where you were going still has nothing to do with the parent comment. His problem IS with a hardware (or software) vendor, and the OS in question is Windows.
Linux or FreeBSD has nothing to do with the problem, as the product may not be available for those platforms. Linux is not for everyone, neither is Windows, and neither is BSD.
You’d also have to be pretty shortsighted to not see that
Sorry to get into other people’s argument, but I think the point he’s trying to make is that whenever Linux has hardware compatibility problems, most people blame Linux (instead of the hardware vendors), but when it’s Windows that has the hardware compatibility problems, people tend to blame the hardware vendors.
For the record, Check Point produces software. And they support Linux, as well as commercial Unix. They even use a rebranded RHEL as their own platform for deployment of server apps, and as a platform for their own security appliances. But they don’t support 64-bit desktop Windows. Why? Because the vast majority of commercial organizations aren’t running 64-bit Windows on the desktop, and that’s CP’s market.
Not to take away from your point, you just picked the wrong example. Check Point doesn’t sell hardware for PCs, and they embraced linux as a platform long before many others did. They just haven’t invested resources in producing and supporting a 64-bit Windows VPN/IPS client for a non-existent market. This has nothing to do with hardware vendors not opening their cookbooks to allow OSS drivers.
FTA:
I’m somewhat confused by this statement. From the Wikipedia page about x86-64 http://en.wikipedia.org/wiki/X86-64 it seems to me that the 64 bit features of x86-64 CPUs can only be used in long mode, and in order to get into long mode, there must be a 64-bit kernel. If a kernel “runs in 32bit” (as stated in the article), the CPU should run in legacy mode, making the 64bit features unavailable.
Also, the meaning of the parenthetical “(the kernel in Snow Leopard is supposed to be 64bit)” is not clear to me. Does it mean that since the Snow Leopard release of MacOS X the kernel does not “run in 32 bit” anymore, thus making the aforementioned technique (which I find dubious anyway) of running 32 bit drivers impossible?
Clarification would be appreciated! 🙂
Well, I remembered the info from Ars Technica [1], and confirmed it in several other places [2] [3]. Sadly, Amit Singh’s websites are offline at the moment, so more detailed explanations will have to wait.
[1] http://arstechnica.com/reviews/os/mac-os-x-10-5.ars/6
[2] http://en.wikipedia.org/wiki/64_bit#Software_availability
[3] http://en.wikipedia.org/wiki/X86-64#Mac_OS_X
On my Core 2 MacbookPro the kernel seems to run in 32bit mode.
MacBook:/ gousiosg$ file /mach_kernel
/mach_kernel: Mach-O universal binary with 2 architectures
/mach_kernel (for architecture i386): Mach-O executable i386
/mach_kernel (for architecture ppc): Mach-O executable ppc
Mac OS X right now uses PAE to access/detect memory above 4GB; however, there is a performance penalty associated with it.
Snow Leopard will be the first ‘pure’ 64bit version of MacOS X – it’ll be interesting to see how many vendors jump onboard and start compiling their stuff to take advantage of the new features.
Thanks, you’re indeed right. Especially your second link provides some insight. The more I’m thinking about it, I find the Wikipedia page about x86-64 somewhat contradictory:
which makes it sound as if only 64 bit operating systems can use the 64 bit features of the CPU (and which, in retrospect, also raises the question of what constitutes a “64-bit operating system”).
Later, in the Mac OS X section, they claim that Mac OS X v10.5 uses the 64 bit capabilities of the CPU and also that “The kernel is 32-bit.”
Hm. I guess I should put my thoughts on the Wikipedia x86-64 Talk page…
It is achievable via the same method as process switching. When a 64-bit process’s thread[s] is[are] going to get some CPU, the kernel switches the state ( as always ), but sets the CPU into ‘long’ mode, for 64-bit operation. It switches back upon the end of 64-bit execution. This means the insertion of only a little bit of data to identify a process as being 64-bit, and to naturally identify each of its threads as requiring 64-bit mode. Rather simple, really.
The kernel simply provides a few short commands to the CPU before a 64-bit thread is executed, and a few more afterward. I long expected this would be possible, and maybe even useful – depending on the time required to make the 32 to 64-bit switch (which obviously isn’t very long – maybe a clock cycle or two).
–The loon
OS X uses a microkernel architecture, so it’s entirely possible for parts of it to run in 64-bit and other parts to run in 32-bit mode.
Think of it in a similar way to using nspluginwrapper to run 32-bit Java/Flash in a 64-bit Firefox. The drivers run as separate 32-bit processes (with kernel privileges) that are managed by a 64-bit kernel scheduler.
(That is what I’m guessing is going on, anyway.)
If the kernel of OS X is considered a microkernel it might be one of the fattest around. More like a twin-kernel, counting its BSD-based portions:
http://en.wikipedia.org/wiki/XNU
http://en.wikipedia.org/wiki/Hybrid_kernel
I don’t know enough about it to comment on whether mixing of 32-bit and 64-bit code is possible in the OS X kernel. Some I/O Kit drivers can run in user space, so those could possibly be of the opposite bitness, I guess.
While I am not sure if Apple ever really revealed the details regarding how 32-bit and 64-bit code coexist in OSX (prior to 10.5), based on the AMD64/Intel 64 architecture specification it is possible to mix 32-bit and 64-bit code in the kernel. In fact, that’s what long mode (which comprises two submodes: 64-bit and compatibility) is about. So unless Apple set out to do crazy/slow things, such as switching between long mode and legacy mode (which would be incredibly slow due to overhead) or running everything under ring 0 (unsafe), most likely the kernel modules all run under long mode and Apple simply elected to keep the driver management modules on the 32-bit side to maintain compatibility. This is quite a feat and would require substantial assembly programming, regardless of monolithic or modular kernels. Memory management would have to use the 64-bit paging paradigm (involving PML4), and the overhead would be similar to or worse than a 32-bit OS using PAE.
Back when I bought an AMD 3000+ some 4 or 5 years ago, I switched to 64bit.
Because I use Gentoo Linux, every package is compiled 64-bit anyway (apart from the very few binary packages I installed).
Agreed. Gentoo does a great job.
My AMD3200+ is running 64-bit Gentoo, and even the few applications (namely Firefox) that had to be 32-bit (namely due to plugins, e.g. Flash) are now converted over. (Yeah for 64-bit Flash!)
When we got my wife her laptop last summer, I opted for Vista64 just to make the conversion now as opposed to waiting. (She wanted Windows; no talking her out of it.) I also ran the Vista 64 RC on the AMD system for a while, and it ran great. All the hardware worked fine and the 64-bit drivers for the SB Audigy 2ZS Platinum from Creative worked too.
I also run some 32-bit systems, but pretty much only where the processor doesn’t support 64-bit. The F/OSS software is certainly there.
That said, the article is a little off. While most F/OSS software is easily converted by a simple recompile, some isn’t. OpenSSL had a while where it wouldn’t run in 64-bit mode very well. (They’ve long since fixed that.) It’s namely math-intensive apps that are problematic, since they typically have optimizations for the bit-sizes they support, and need to be converted to be safe when using new types – you can’t convert a 32-bit UINT to a 64-bit UINT without ensuring the application will continue on without a problem. For example, if you use 0xFFFFFFFF instead of (-1) you’ll have a problem, since 64-bit (-1) is 0xFFFFFFFFFFFFFFFF instead of 0xFFFFFFFF; while pretty safe in a loop, it would be hell on a return value where the function used (-1) in the return instead of 0xFFFFFFFF. Anyhow… the point is there really is a bit more work than just a simple re-compile, though the differences are typically small.
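To make that pitfall concrete, here is a minimal, purely illustrative C sketch (the function and types are hypothetical, not taken from OpenSSL or any real code base):

/* find_item() signals "not found" by returning (-1) cast to its
 * unsigned return type, i.e. all bits set. */
#include <stdio.h>

static unsigned long find_item(int exists)
{
    return exists ? 42UL : (unsigned long)-1;
}

int main(void)
{
    unsigned long r = find_item(0);

    /* Works on a 32-bit build, where (unsigned long)-1 == 0xFFFFFFFF.
     * Silently fails on an LP64 build, where r is 0xFFFFFFFFFFFFFFFF. */
    if (r == 0xFFFFFFFFUL)
        printf("not found (literal sentinel)\n");
    else
        printf("64-bit bug: sentinel check missed, r = %lx\n", r);

    /* Portable check: compare against the same expression the callee used. */
    if (r == (unsigned long)-1)
        printf("not found (portable sentinel)\n");
    return 0;
}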
[EDIT: Adding the below]
That’s not to say 64-bit Windows conversion is without its problems. My wife’s Vista laptop doesn’t like installing our HP-950C printer since there is no 64-bit native driver (in the list at least), HP’s website says it’s “in the system”, and the printer is hosted via CUPS over the network.
Also, my father-in-law recently got a new laptop with Vista 64, but has since sold it to my brother-in-law since the Dragon Dictation software would not work on it – apparently it supports only 32-bit Vista, not 64-bit.
So yes, Microsoft has a ways to go in getting the Windows-centric software industry behind 64-bit conversion. They do a decent job themselves, but there’s still work to be done. The F/OSS camp is a little better off since conversion is quicker and apps that need fixing can be easily fixed by the distributions (at least) and have patches submitted back easily enough if the maintainers don’t get there first. If I’m not mistaken Apple did the conversion to 64-bit PPC, then reversed to 32-bit Intel, and is now moving to 64-bit again but on Intel procs.
I’ve been running 64-bit Vista since last year and it has been working just fine. In my opinion, unless you have something that won’t work properly on x64, I see no reason not to use it.
I run Linux primarily. When I bought my computer (last December) I had chosen to go with 8GB of RAM, leaving me no real option. It had to be 64 bit.
I currently run Ubuntu 8.10 64 bit. It’s perfectly stable. The only trouble I have is with Eclipse. It uses SWT and that is native. They have a 64 bit version but it crashes a lot; killing my Eclipse (and my unsaved work). To fix the problem I run Eclipse 32 bit, running on a 32 bit Java (but everything I run within Eclipse uses the 64 bit Java).
For games I still fall back to my 32 bit Windows XP that only uses 3.5 GB of my RAM but works fine.
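For anyone hitting the same SWT problem, the 32-bit-Eclipse-on-32-bit-Java workaround can be pinned down in eclipse.ini; a rough sketch (the JVM path is only an assumption about where a 32-bit Sun JVM might live on your system, and -vm must appear before -vmargs, with the path on its own line):

-vm
/usr/lib/jvm/ia32-java-6-sun/jre/bin/java
-vmargs
-Xms128m
-Xmx512m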
Maybe use OpenJDK and/or compile Eclipse yourself? It’s open source after all. 🙂
I use eclipse on Debian sid (amd64) and it has been running without problems with sun-java6 and openjdk. I suggest that you install the debug symbols for swt and send stacktraces to the ubuntu maintainers when it crashes. It might be a bug that only triggers under very specific conditions (plugins etc).
I’ve worked on Windows Vista 64bit for some time now (well, since around when Vista came out) and there are no flaws.
It’s like swearing in church, but same here… been running Windows Vista (SP1) 64-bit for a bunch of months (must’ve been 5 or 6) without any problems. Not even a BSOD once.
Weirdness!
😉
-zsejk
Just switch to GNU/Linux – seems like the right time for you. You can still keep your 32-bit Windows in VirtualBox.
I do it and it fills all my needs… well, almost – except for gaming, but that’ll change soon when DirectX support comes (after all, we finally have OpenGL acceleration in VMs now!).
It is a pity that Windows users still need to run 32bit versions, years after modern hardware switched to 64bit.
32bits OSes are limited to 4GB per application
Since the Pentium Pro (1995), x86 can have at least 36bits (=64GB) of physical address space.
I don’t know whether current OSes use it though.
Client (non-Server) releases of Windows force physical addresses to be in the 4 GB range because of historical problems in 32-bit drivers due to storing physical addresses in 32-bit variables. Server drivers tend to be better-tested in these circumstances, so high addresses are allowed on those.
Also, Windows processes get 2GB each on 32-bit systems. A 32-bit process on a 64-bit system can get 4GB if its executable is marked as large address aware (basically a promise by the app writer to the OS that he/she is not using the top bit of pointers for nefarious purposes).
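For reference, here is roughly how that flag is set and checked with the Microsoft toolchain (myapp.exe is just a placeholder name):

rem mark an already-built 32-bit executable as large address aware
editbin /LARGEADDRESSAWARE myapp.exe

rem or set it at link time
link /LARGEADDRESSAWARE /OUT:myapp.exe main.obj

rem verify: dumpbin lists "Application can handle large (>2GB) addresses"
dumpbin /headers myapp.exe | findstr /i "large"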
There is support for it in the Linux kernel, but usually distributors do not enable it. You will have to yum/apt-get install a special kernel.
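As a rough pointer (package names vary by distro and era, so treat these as examples rather than gospel): Fedora ships it as kernel-PAE, and on Ubuntu of that vintage the -server kernel flavour had the same option enabled.

# first check that the CPU advertises PAE at all
grep -o pae /proc/cpuinfo | head -1

# Fedora
su -c 'yum install kernel-PAE'

# Ubuntu 8.x (assumption: the -server flavour, which enables HIGHMEM64G)
sudo apt-get install linux-server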
… Linux and maybe not FreeBSD either.
For OS X and Solaris yes.
…albeit only 2nd-hand experience (thankfully).
The only two 64-bit computers I’ve encountered were both being cursed at by their owners, both times due to purchases of (new) hardware that had no 64-bit driver available. I guess one of these was a positive experience actually – for me, as I was able to flog the guy my spare wifi card, having found a (beta) 64bit driver for it.
I probably have a new purchase coming up in the next year, but going 64bit hasn’t entered my mind for a second. I think 32bit will continue to be the norm, or at least a grudgingly-supported legacy, for probably the lifetime of my next machine at least.
(Then again, MS have already shown their willingness to give lazy IHVs a kick up the arse with the Vista driver model, so I may be proved wrong!)
Is it not also a question of software culture?
When Apple releases Snow Leopard, all drivers are going to have to be recompiled. Not so bad for Apple, who have fewer drivers to deal with, but third parties still exist.
The difference, though, between OS X third parties and Windows third parties is that on the whole, OS X software authors tend to keep up to date, adopt the latest technologies and update their apps more often, and I suspect they will deal with the 64-Bit transition with almost no bumps, no more than maybe the few weeks after Leopard was released when I had to wait for a couple of updates here and there to get everything working. Practically nothing, compared to what an average user would experience switching to a 64-bit-only Windows system.
_If_ 64-Bit brings the benefits that the direction of the market demands, then Linux and Mac users are not going to be sweating, but Windows is so deeply entrenched in legacy, and so dependent on categorically lazy and unreliable third parties that I simply cannot imagine Windows going 64-Bit by default on consumer bought machines for the masses. Which is a great loss, because machines with 64-bit chips have been shipping for years now, and there are user groups which are going to eventually give up on Windows when they can’t be arsed to install a 64-Bit OS from scratch to use 64-Bit Photoshop and just use a Mac instead which will work straight out of the box.
That’s just my thoughts though, all very wrong I’m sure.
BTW: I was shocked to find out last week that a 64-Bit version of Vista Ultimate uses 1.6 GB of RAM on boot, by default.
Pretty much agreed, Kroc.
Of course, Microsoft basically nurtured such behaviour for a long time, and now when the company is starting to fix the errors of the past, as well as changing its ways, it’s still bitten in the ass by short-sighted design decisions back in the day.
Oh well. My machine runs just fine on 32bit. It’s not like I need 8GB of RAM at this point.
Are you aware of what exactly SuperFetch does?
I am sure he is aware, but I’ve been using Windows 7 and on boot I have about 400 MB of RAM in use… 1.6 GB sounds like an awful lot. Even Ubuntu Linux 8.10, which I use once in a while, does not take up that much RAM at boot. And it is probably more efficient at using resources than any OS I know! I don’t want to get into a Vista vs. other OSes debate, but there must be something fundamentally wrong with that OS.
Ubuntu does nothing like SuperFetch, and Windows 7 always has used the exact same amount of resources as Windows Vista on a clean boot, at least for me. The SuperFetch technology has not really changed as far as I know. What has changed, among other things, is Microsoft has made numerous tweaks to make the system more responsive. Unused RAM is wasted RAM, and really all it takes is some heavy lifting (large app loading/usage, or large games with hefty load times) to see the clear advantages offered by using Vista and later Windows releases.
The system itself does have a smaller footprint, yes, that is what allows so many people to have success running Windows 7 on netbooks of all things. However, Windows 7 will still cache your most-used data and programs into unused memory just like Vista. When you open Task Manager there is the Total, Cached, Available and Free fields for memory. This is just like free -m on Linux. On a fresh boot here on Windows 7 x64 (And Vista, too) most of my memory is sucked up by the same applications and games that I always load up anyways. This is a good thing that makes my computing experience smoother. I don’t want all of my memory to go unused when it could be exploited to make everyday tasks quicker.
I didn’t explain it as well as I could have, but I hope I did a decent enough job. It comes down to this: I wish Ubuntu did the same kind of stuff that SuperFetch does on Vista. People have in fact been working on implementing equivalent functionality; see this here http://www.osnews.com/story/19385/Preload_the_Linux_SuperFetch_PreB… on this very site.
Bingo.
I am so happy we could find a point to agree on Thom.
Meanwhile, I don’t want my system to be tied up with unnecessary disk I/O at boot and, possibly worse, at random times after… both slowing what I’m doing down, and causing unnecessary use of my drive…
It never fails to amaze me when looking at the hard drive indicator light on a Vista machine (even one that’s never seen the Internet): the damn thing is almost never off for even a full second! Makes me wonder what other crap *besides* SuperFetch is running by default…
I like the classic style. Run a program once… it’s in memory (and on modern processors, takes barely any time to do it). Close it and re-open it, it’s even faster, because it remained partially in memory. Last thing I want is the OS to try and load a game or an office suite in memory when I have no intention of using it. I’m no fan of Microsoft’s “smart” algorithms designed to “learn” or “figure out” what you’re going to do with your computer because, quite simply, your use can change on a dime and half the time Windows’ guesses are wrong anyway. While wasting resources at the same time. It’s “Personalized Menus” all over again… [cringe]
My suggestion then if you prefer the XP style prefetching is to just turn Superfetch off and forget about it. Now that was easy wasn’t it?
Yes, which is why, after plowing through the system services, checking processes and Googling, I was shocked to find that something *wasn’t* wrong with the system. I disabled SuperFetch and it made no noticeable impact on RAM at all. I know 64-Bit programs use more RAM, but I didn’t expect by that much.
It has nothing to do with 64 bit application memory usage and everything to do with caching your most-used files and applications into memory for quick access. Disabling SuperFetch is a bad idea because really all it does is reset you to XP-levels of memory management. Furthermore, if your memory usage didn’t change at all when you disabled SuperFetch (and restarted your system I hope, to make sure the change took place) then I’d say you have problems with your particular setup. Vista does not eat 1.6GB of memory on a clean boot for the base system and to claim it does is completely false. What does it take to get people to understand that they are complaining about aggressive caching meant to make usage patterns quicker? Stop looking at your memory usage and thinking it’s the same sort of meter it used to be, because operating systems simply work differently these days. Unused resources are wasted resources.
It wasn’t my machine, just a customer’s, the first 64-Bit Vista system I’d seen. I was only quoting a figure — *not* how that figure is construed.
Of all the Vista systems I’ve been on, a clean 32-Bit Vista system uses about 600 MB of RAM on boot. Caching or not, that’s the figure on the display on boot, okay? On this 64-Bit system, clean, that number was 1.6 GB. Caching or not, it is a different number. That’s all I was stating.
Being on a Mac, I understand different methods of memory management and that Vista uses management more akin to OS X than XP’s hands-off approach.
You should avoid using anecdotal evidence, especially anecdotal evidence that comes from “a customer’s” PC. Vista x64 memory usage is roughly the same as Vista x86 memory usage on my particular box, as well as on anyone’s box that I know of. The more memory your system has available to it, the more memory Vista and later will utilize for caching of what you use most.
Furthermore, you are making it sound like you are basing your statements on clean as in fresh systems/installs. You don’t really expect a clean system to preload data in anticipation of usage patterns when there are no usage patterns defined?
Well then stop giving users the impression that memory is being used like that; why give inaccurate data in the task manager and then claim the user misinterprets it?
Yes, I know that new versions of Windows cache memory, all of it for that matter, but at the same time you’re using disk I/O to do this. What’s the point of a faster startup time when prefetch uses rather heavy disk I/O to do it?
There is a difference between them telling you how it works and how it actually works on different setups. XP is faster than Vista without all this new memory management, so it makes it a rather moot point.
On topic, I use pure 64bit Arch Linux with 4GB of RAM (thanks Adobe for the 64bit Flash plugin), so I don’t need any lib32 packages. I’m glad I can.
There is no inaccurate data being given from task manager… it is not Microsoft’s fault people do not educate themselves as to what the properties mean. Cached is very straightforward and using the free command on Linux boxes will give you roughly the same information.
No, it doesn’t use a bunch of disk IO. Stuff put into the SuperFetch cache is meant to reduce disk IO by preloading those items you will be loading anyways. I think you are confusing SuperFetch with the Indexer service. SuperFetch really only causes a lot of disk IO when you log in to initialize and populate the cache. I personally haven’t been on a system where this activity is noticed. More info here: http://www.codinghorror.com/blog/archives/000688.html
No, XP is not faster than Vista without all the new memory management and other changes and tweaks under the hood. Benchmarks show Vista and 7 as being faster under a variety of workloads, and XP still being faster at others. This is kind of how things work with even various Linux kernels and distros.
Educate themselves, eh? Considering Windows is for the average Joe, I’m sure Microsoft would make it more readable for them: memory used 800MB, cached 200MB. Oh wait, it’s really not like that; they are supposed to understand the complex workings of memory management.
No, when Vista caches memory the disk thrashes until it has cached all of your free memory. That’s not indexing; it does it every time you boot into it.
Dude, first off the way Task Manager represents memory usage as a summary is no different than any other equivalent program for any other operating system. Total/Cached/Available/Free is not exactly ambiguous and most people understand what the meaning is. The fact that you feel so dead-set on railing against it shows how absolutely silly you are arguing for something so cosmetic. It is informative, it tells what’s going on, how else are they supposed to represent this, by dumbing it down by removing the Cached or Available?
Second, you are completely wrong to say that Vista thrashes the disk when you log in as it attempts to utilize the SuperFetch cache. You are basically complaining about what is simply an extension to the Prefetcher that comes with XP itself. You don’t seem to understand that SuperFetch is a net gain because your apps open faster and your system is more responsive to what you do with it. Did you even read the link I provided? You also are amazingly ignorant to the fact that Vista and subsequent Windows versions are better at I/O in general. Am I talking to a wall here who loves to complain but can’t actually post something of substance?
In summary: SuperFetch preloads your apps into unused memory, and does so in a gracious and gradual manner that doesn’t thrash your disk around and make your system unusable for a long period of time after boot. From the time that my desktop and taskbar appear I am able to start launching my applications and use my computer with minimal delay because it’s already anticipated that I am going to do what I always do, open my few apps and start using the system. This whole exercise has been about as painful as reading a Roblimo review of Windows.
Hey, that’s how Microsoft does everything else, is it not? See, Windows XP’s defrag tool vs. Vista’s newer equivalent for just one (sickening) example… there are *many* others.
Another, much older example would be the Windows 95/98 installations vs. Windows XP’s (and newer). Where the hell did all the options (selection of programs to install, for one) go?!
I don’t remember what the Vista defragmenter looks like but if it’s anything like the 7 defragmenter it is not exactly dumbed down past the point of usefulness (Though I always use JkDefrag, myself). Screenie here: http://img206.imageshack.us/my.php?image=defragmenterza6.png
Not only is it a good thing to ask less during an install (Have you ever had to do a large amount of them?) but it’s just plain smarter and ties in with the fact that starting with Vista Microsoft began using image-based installs a la Ubuntu. Besides, what are you missing out on by not having to deal with so many prompts during the install process? I find installing XP to be quite an unpleasant ordeal compared to a set-and-forget Vista install.
You couldn’t have come across as any more biased unless you were to toss “Microsoft” and “Windoze” in your post.
Windows XP’s defragmenter allows you to analyze your drive, and gives you some quick and basic fragmentation stats, along with a before-and-after preview of your drive. At a glance, you can determine whether you want/need to defragment or not. Vista’s defragmenter? Well, you get a window with no statistics, no way to analyze and determine whether it’s needed. You just hope you actually do need to, because it’s gonna take a long time…
Whoop-de-do, so you can schedule regular defrag runs in Vista, just one more thing for it to miss (if the computer is off) or get in the way if you’re actually busy doing something on the computer. That’s a joke and a serious step down. I always used PerfectDisk in WinXP, but at least XP’s built-in defragmenter was adequate if nothing better was installed.
I would most definitely call that “dumbed down past the point of usefulness.” Though, opposite of you, I can’t confirm right now whether Win7’s defragger is the same as Vista’s, but I have a strong feeling it is.
Dude, you end off your post saying you think the 7 defragger is the same as the Vista defragger, when in my own post I linked to a screenshot I took showing what the defragmenter looks like in Windows 7. Did you bother to read through what I had said?
I know how the XP defragger works, it’s a very stripped down version of O&O Defrag that I never really thought highly of. The Vista defragger was a mistake I completely agree with you on that, especially their idea of running on a schedule late at night. Things are much more sane with Windows 7.
At any rate, if all you can offer for how Microsoft does nothing but dumb things down is the install process and the defragger, I don’t think you’ve got a whole lot of a case.
Yes, and your screenshot did not work.
I don’t feel like spending a week debating, so I just left it at 1 big one (defragger) and 1 admittedly minor one (lack of install options). I’m already tired of debating, so I’m out of this thread.
Edit: It apparently was NoScript, blocking some third-party script from running on that site, preventing the Win7 defragger image from displaying. After having seen it, yes, it is a step up from Vista (and about on par with XP). Good, one down, now they’ve got a few dozen other things (IMO) to sort out. Anyway, I’m out.
It does work, I would say give it a second to load and see how the interface is but I guess an intelligent discussion is asking a bit much of you, fine. Just stopping by to throw in “Microsoft BAD!” and then leave when someone tries to address your points, apparently.
So you think me and others are making up some story about Vista thrashing your HDD to cache free memory? (We must be Windows haters.)
You know what, I think you’re one of those people who read docs or tech pages rather than looking at real usage on people’s machines. I can reproduce this on my wife’s machine; she can’t open anything for a while because it’s busy prefetching. Do you really think it’s some kind of fabrication I made up off the top of my head, or do your docs say that in theory this doesn’t/shouldn’t happen?
Quit being an airhead and bandying about this idea that I am not basing my positions in solid ground dude, I am working off my experiences as well as the experiences of my friends and family (I am the go-to computer guy for a lot of people I know and I sell my services as well). I don’t think you know precisely what is causing the problem of thrashing and are mis-attributing it to SuperFetch when the much more likely cause is the Indexing service or a driver/hardware issue (Think low specs). The simple fact of the matter is that the issues you describe are not standard operating procedure for Vista and beyond, and your experiences are not anywhere near the same as mine or people I know. I would assume that you are at least using SP1. It is simply not normal behaviour what you are describing, and I would be upset at my hardware if it was up to spec and Vista was behaving like that.
EDIT just to add this — I wouldn’t be surprised to find that there was a lot of crap loading on start on that box if going from login to usage takes so long. I’ve seen XP boxes that took at least 3 minutes just to stop thrashing and let me work. See how wonderful anecdotal evidence is?
I’m sorry, but you’re the one who seems to think we have something wrong and have indexing or a ton of apps launching on startup; where is your evidence that that is happening?
HDD thrashing starts on startup; looking at the task manager, the free memory starts to drop and goes into cache. The HDD thrashing stops when all free memory is cached: it says 0 free, “my memory” cached. I guess that’s black magic or coincidence then (yes, I know this means it didn’t use all my memory up and I have none free).
You sound like someone from a support line saying the same thing from a script or your docs, as if it can’t happen any other way.
I don’t have any evidence that this is what the cause is, I am simply stating that the system is not supposed to behave like that and doesn’t on any system I’ve personally interacted with and offering possible culprits. You seem more than happy to simply blame SuperFetch without providing anything to back up your claim. I on the other hand am merely suggesting that your problem most likely lies elsewhere.
I don’t know what I’m supposed to say here other than yeah? Your HDD thrashing sounds like a hardware/software issue completely unrelated to how Vista and specifically SuperFetch function. Again, no systems I use or see behave like this unless their hardware is not up to snuff, like the XP box of my sister and her husband that’s running on ~190MB of memory. That thing is swap city.
So because I’m taking a logical approach to evaluating your statements and trying to explain how the technology in question here actually functions I am coming off as a scripted bot? If I sound like I’m reading off a script that I can’t divert from then you sound like someone who really can’t be bothered to actually post anything factual. You have not posted a single fact that can be evaluated, your posts are just a bunch of anecdotal evidence completely intent on assigning blame to a Windows Vista subsystem that does not behave the way you describe in any environment that I’ve encountered. What are you trying to do run Vista on a box with 512MB of memory? The x64 version on a box with 1GB of memory? It sounds like your system is quite strained in a way that I haven’t seen before on a well-managed system that doesn’t have a ton of stuff loading on startup.
I’m surprised you haven’t gone all the way and just called me a shill because that’s what it sounds like you want to do. I can’t possibly be making the assertions I’m making because *gasp* it’s been my experience, right? We are both relating our experiences, and I’m just trying to defend a piece of technology that I haven’t seen as particularly harmful in most situations.
Excellent post, Oddfox.
We are talking about modern OSes; everyone should think that way.
By the way, people are still monitoring resource usage based on that old concept. That should change, too.
If you’re running Linux I would recommend using a 64-bit version. Obviously all drivers are portable so there really isn’t a driver issue like there can be with Windows 64. I haven’t run into a 32-bit only open source application in a long time either. Even proprietary applications tend to have 64-bit versions for Linux. If you are stuck with a 32-bit application it will run without issue. 64-bit on Windows still needs a lot of polish. It’s going to be a difficult road because Microsoft needs to depend on third parties to a much larger degree than any Linux distro.
Every operating system works this way, not just Windows.
The problem you mention is specifically a Google Chrome issue on Windows 7 x64, which I might remind people is in beta. Not only is the onus on Google to make sure their programs are written correctly, but this particular issue is easily worked around (I should know, I had to do so. It’s as easy as tacking “--in-process-plugins” to the end of the Target field).
Talking about Mac OS X fat binaries, those are going to be gone soon with the advent of Snow Leopard with the move to 64-bit exclusively. That’s one of the big reasons the system is so much leaner and the footprint is smaller when talking about Snow Leopard. Apple is going to be hoping that developers move on the same way they have.
I moved to 64 bit when I got my first 64 bit capable processor (Athlon 64 3000+) way back in the day and I haven’t looked back since. The only thing that doesn’t work is my old Genius webcam (Which happens to work quite nicely in Linux, 32 or 64 bit). The difference between 32 bit and 64 bit is nominal in most cases, and when I can use specially compiled software that actually takes advantage of these extra capabilities (audio/video transcoding or encoding) the advantages are clear and pronounced. I have no interest in using a legacy 32 bit system.
Fat binaries will not be going away, Apple have just simply modified the OS Installer to strip unneeded binaries from the system during installation.
The same effective slimmer size can be achieved in Leopard by running a binary thinner like Xslimmer or Monolingual.
Snow Leopard will also be 32-bit (to work on the Feb-06 32-bit Core Duo Macs); the difference is that on a 64-bit machine, a 64-bit kernel and drivers will be used.
I guess I misunderstood all of the talk about Snow Leopard from when I was following the news about it more aggressively back when 10.5.3 was new stuff. Can’t say I’m not disappointed that a company with little to lose by pushing things forward wouldn’t take the chance to do so. Xslimmer does sound pretty neat though, wish I had a system to run it on.
Xslimmer is horrible; if I had a dollar for every person who has buggered up their system because of it – I’d be able to buy a McDonalds franchise.
The illusion being sold is that it uses less memory (which is a crock) and loads faster (another crock). The only semi-truth is that it uses less hard disk space; sorry, but given that one can buy a big-ass drive, the risk of screwing up a system simply to save a few megabytes is pretty pitiful.
I didn’t look into it a whole lot beyond a quick glance at the homepage and quite frankly it does sound more dangerous than it’s worth for the vast majority of people. Certainly the memory usage and loading times should be unchanged, and I don’t know anyone who is so hurting for space that a tool like this is very vital. I did notice on the homepage for Xslimmer that they maintain a blacklist of programs that are not modified, and that threw up a very bright red flag for me. My remark was more about satisfying an unending curiosity about things than actually trying to slim down.
I do agree completely that tools like these are generally best left alone by the vast majority of users, especially when the frontpage of the project says the trial lasts until you free up 50MB. Going by their own data presented, that’s not enough to cover even shrinking down Garage Band. One of the other things I really don’t like about the Mac community is every other little utility of questionable re-use value has a price tag on it if you want to get something done, with trial periods that are completely unreasonable. Then again, I’m spoiled and like to run FOSS as much as possible. And yes, I’m so cheap that 12.95 is expensive in my eyes for a small utility.
The best example is Office 2008 updates not working because people have applied XSlimmer to their installation of Office 2008. Now they have a slimmed down installation where the updates won’t work properly because they’re based on a delta and not a complete replacement of files (in some cases) and expect a non-tampered installation.
For me, I know a bit about computers but I tend to believe that engineers at Apple, Microsoft, Sun and so forth know a little more than I do – if I knew more than them I’d be working for the said organisation 🙂
“So my question to you is: have you ever faced this choice? What were your arguments to go one way or the other?”
No, because I still run a 2001 machine.
My next system will, no doubt, be 64-bit (whenever I get it…). In order to make the most of it and make it last further into the future, it’ll most definitely have more RAM than the upper boundaries of 32-bit support… I’m thinking maybe 8, 10, 12GB. Some Linux variant is likely going in the drive and getting installed upon first boot (not impressed at all with Vista, and 7 is not much of an improvement over it IMO…).
The few 64-bit machines I’ve used had 32-bit versions of Vista, but I have installed (using Wubi) a 64-bit version of Ubuntu on two of them. What didn’t work could have easily been predicted: their wireless network cards and Flash. No surprise there, and the wireless cards didn’t work in 32-bit either. Other than the Flash mostly, it felt just like the 32-bit versions I’m used to. Have yet to see a 64-bit version of Windows, though.
Have to agree with the above point. I can wait for an application to launch the first time I come to it. Otherwise I want my RAM ready to launch whatever I decide to do. All this predictive stuff really annoys me. Personalized menus are the first thing that I switch off when I install Windows and/or Office.
Reminds me of looking at a website once about a story involving a chicken (referenced as a cock – a male chicken). What sort of advert do you think I got served with? Yup, Viagra. This intelligent and predictive stuff is only slightly more intelligent than the people who come up with these ideas in the first place.
Strange…
At work we migrated some Java servers to 64bit Linux as early as summer 2003. We bought two identical dual Opteron boxes, put 32bit Fedora on one and 64bit Fedora on the other. That 32bit machine was the last one purchased that had a 32bit OS on it (excepting Windows, of course). And I personally haven’t used anything but 64bit OSes on 64bit hardware ever since (excepting Windows, which I don’t run).
So in my view transition should have been active 5 years ago and done 3 years ago.
Flash has worked in 64bit Linux for a long time, using a wrapper to invoke the 32bit Flash version. Thankfully that was fixed a long time ago.
Just to add my tuppence: I bought a 64-bit laptop from HP 3 years ago and, despite promises from HP at the time, they still haven’t released any 64-bit drivers for it (although I suspect most of the hardware is supported in Vista 64 – just a shame the crap ATI 200M graphics chip causes it to get a ‘Vista score’ of 3).
It works fine under Linux in 64-bit now that we have open-source Broadcom drivers – except for the crappy ATI drivers, of course (but that’s the same in 32-bit).
I’ve been using Vista64 since SP1 came out. No problems so far, but then again I am a developer, not a joe-blow user.
I’ve run Vista 64bit for quite a while now and it is running fine. I’ve had no problems with any applications, and all of my hardware is supported out of the box or via Windows Update (even my well-aged analog TV card works just fine). The ATI driver for x64 seems a bit more stable; I had problems with the 32bit Vista driver from ATI.
Several years ago my employer purchased a high-end PC to run several numerically and graphically intensive applications. We bought both 32- & 64-bit Windows XP, but never installed the 64-bit version because, [wait for it…] the node-locked license manager for these applications has never been upgraded to 64-bit. Inexcusable!
What’s the point of selling 64-bit CPUs if you don’t encourage the ecosystem to embrace it? Vista should have been released as 64-bit only, which would have forced third-party hardware drivers to (finally) step up to 64-bit. Microsoft could do us all a favor and make Windows 7 64-bit only.
Personally, I’ve been running AMD 64 Ubuntu since 6.06. Yes, the first year or more was painful and not for the faint of heart, but 64-bit web media support (e.g. Flash & Java) has improved. But, I also dual-boot 32-bit WinXP Pro for gaming. I’ve yet to figure out how to access my old Mad Catz Panther XL joystick in Linux, and some of the game DRM schemes aren’t supported by the virtual machines I’ve tried (grrr).
When you have Windows-only applications with large memory requirements, it’s a waste of breath to argue the point, and with 8GB of RAM it’s a no-brainer. I’ve been running x64 Vista on a four-year-old Dell Precision workstation for close to a year with no issues – even the sound card works! I’ve also got corporate deployments of x64 XP – no issues. I for one am looking forward to Windows 7: in our test environment, old hardware with Win7 performs better than new hardware with Vista. I think all the Mac misinformation is working; in a modern office environment with quality software and hardware, people are worked up over nothing.
Using XP 64bit and Ubuntu 64bit. No problems at all. Well, maybe flash player for linux could be a bit improved, but no major issues.
Maybe you won’t even be asked what your choice is.
The availability of cheap RAM answers this question for you.
Dell and others will only put machines with 4 or 8 GB of RAM on the market, and most people will stick with the 64-bit OS that comes pre-installed because, you know, it doesn’t make sense to have 8 GB and only use 2 or 3. So eventually, yes, you will move.
While 64-bit mode affords the use of a lot more RAM without ugly hacks like PAE, the main benefit is that it expands the instruction set to include twice as many general-purpose registers (16 instead of 8). This is a huge win for the register-starved x86.
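To put a number on that: the 64-bit calling conventions actually take advantage of those extra registers. Here is a minimal C sketch of my own (assuming gcc on x86-64 Linux and the System V AMD64 ABI – none of this is from the comment above); with 32-bit multilib installed you can compare "gcc -S" against "gcc -m32 -S" and watch how the arguments are picked up.

    #include <stdio.h>

    /* Under the System V AMD64 ABI (64-bit Linux/BSD), the six integer
     * arguments below arrive in the registers rdi, rsi, rdx, rcx, r8
     * and r9. In the classic 32-bit cdecl convention they would all be
     * pushed onto the stack instead. */
    long sum6(long a, long b, long c, long d, long e, long f)
    {
        return a + b + c + d + e + f;
    }

    int main(void)
    {
        printf("%ld\n", sum6(1, 2, 3, 4, 5, 6)); /* prints 21 */
        return 0;
    }

Comparing the two assembly listings for sum6 makes the “register-starved” point visible: the 32-bit build has to fetch every argument from the stack, while the 64-bit build receives them all in registers.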
Doubling the width of memory addresses isn’t always that great. You’ll wind up putting more pressure on the cache since now your structures have been bloated a bit. Sure you can address more memory now, but at the same time, you might find yourself taking cache misses more often.
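A quick way to see the bloat described here – a throwaway sketch of mine, not anything from the post above – is to print the size of a pointer-heavy struct: when pointers go from 4 to 8 bytes, fewer nodes fit in each cache line.

    #include <stdio.h>

    /* A typical linked-structure node: two pointers and an int.
     * On a 32-bit (ILP32) build this is usually 12 bytes; on a 64-bit
     * (LP64) build the pointers double to 8 bytes each and alignment
     * padding rounds the struct up to 24 bytes. */
    struct node {
        struct node *next;
        struct node *prev;
        int          value;
    };

    int main(void)
    {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(long)        = %zu\n", sizeof(long));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

On a typical LP64 Linux build this prints 8, 8 and 24; the same source built with -m32 prints 4, 4 and 12 – the same data, with twice the footprint for the pointer fields.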
Actually, in long mode (IA-32e mode in Intel jargon) the paging scheme is further expanded upon PAE, consisting of up to four levels of paging data structures (PAE under 32-bit mode has three levels). PAE has to be enabled prior to entering long mode, so it’s a moot point. The extra registers certainly help, and the Win32 API calling conventions for 64-bit systems were changed to utilize more registers accordingly. The main advantage is still the ability to address more physical memory, while also allowing programs to use a larger virtual address space.
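For a rough sense of scale, here is my own back-of-the-envelope arithmetic (assuming the commonly implemented widths – 48-bit virtual addresses under 4-level long-mode paging, 36-bit physical addresses on early PAE parts, and plain 32-bit paging otherwise):

    #include <stdio.h>

    int main(void)
    {
        /* Plain 32-bit paging: 2^32 bytes = 4 GiB of address space. */
        unsigned long long plain32  = 1ULL << 32;
        /* PAE on early implementations: 36-bit physical addresses = 64 GiB. */
        unsigned long long pae      = 1ULL << 36;
        /* Long mode, 4-level paging: 48-bit virtual addresses = 256 TiB
         * (9 + 9 + 9 + 9 index bits plus a 12-bit page offset). */
        unsigned long long longmode = 1ULL << 48;

        printf("32-bit paging : %llu GiB\n", plain32  >> 30); /* 4   */
        printf("PAE (36-bit)  : %llu GiB\n", pae      >> 30); /* 64  */
        printf("4-level paging: %llu TiB\n", longmode >> 40); /* 256 */
        return 0;
    }

So the win isn’t only physical RAM: the architecturally addressable virtual space jumps from 4 GiB to 256 TiB.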
A few years ago I switched from 32-bit to 64-bit Linux, and after using it for a year or two I found that there was absolutely no performance benefit (for what I use it for – just general stuff), and most of the time I just had problems – things like the Flash plugin, etc.
There were workarounds, but I eventually got sick of having to do extra work and just switched back to 32-bit. I’ve been using 32-bit ever since without any trouble. My PC runs fast (even though it’s three years old) and I don’t run into any trouble due to it being 32-bit.
One thing all the 64-bit users may not realise is that 64-bit applications use more memory: pointers (and, on Linux’s LP64 model, longs) double in size. So yes, you can map more memory with 64-bit, but you’ll also need more memory. And as mentioned in the posts above, the fatter data effectively gives you less usable CPU cache (roughly half, for pointer-heavy data).
I still haven’t heard of 64-bit being that much (if at all) faster than 32-bit, so I’ll be staying with 32-bit for the foreseeable future.
With all the grammar flaming and general douche-baggery aside, I have a Slamd64 workstation and a Vista x64 workstation purring along just fine with zero problems. I say go for it if you’re capable of doing research on what works and what doesn’t.
As someone who has been using an x64 Windows OS for nearly two years now and has used x64 Linux as well – go for it.
Vista x64 is very good and worked with everything I threw at it, and I’m now using Win 7 x64, which is a great OS for Windows. MS needs to can x86 Windows 7 except for OEMs who might use it in sub-notebooks; it has no relevance in today’s laptops/desktops.
If you “don’t want to be bothered with its potential problems” then you have absolutely no business writing about 64 bit in the first place. That’s sort of like writing an article about how beautiful the moon was before dawn without actually being outside to see it – you have a concept of it but no real experience.
64-bit makes sense if:
1. You have at least 4GB RAM (in fact, the more the better – 64-bit binaries tend to be bigger, both in their size and in how much memory they use). 32-bit systems tend to be limited to 4GB RAM, and memory-mapped I/O “steals” address space from that, so you typically see only about 3.25GB usable, for example.
2. One or more of the following is true for at least one of your applications:
a) An app uses 4GB+ RAM itself (e.g. database, massive photo editing etc.).
or
b) It has a 64-bit native version (meaning it can use extra registers and not serve the i386 lowest common denominator of instructions), which will usually be faster.
or
c) The 32-bit app [assuming no 64-bit version is available] actually runs faster on 64-bit OSes (probably rare, this one).
Note that RAM is very cheap at the moment, so OEMs and DIY builders have no excuse not to put in 4GB+ of RAM – Dell are lamely putting a Frankenstein 3GB of RAM in some of their models (2x1GB plus 2x512MB sticks!) to stay under the ~3.25GB that 32-bit Vista can actually use, and are therefore still shipping 32-bit Vista (despite the fact that 4x1GB or 2x2GB would cost them only a few dollars more, if that, and 64-bit Vista is the *same cost* to OEMs as 32-bit Vista!).
You have to go to Dell’s XPS range, at quite high prices, before Dell finally sees sense and ships 4GB RAM with 64-bit Vista (though Dell UK ludicrously defaults to 32-bit Vista, but at least 64-bit Vista is a zero-cost option).
I think the OEMs are partially to blame for this – yes, they need the 64-bit Vista drivers to work, but they should state this to any hardware manufacturers who want their hardware included: no 64-bit Vista drivers = we won’t include your hardware. Only the big OEMs have the clout to pressure the manufacturers to ship 64-bit drivers, but they sit on their backsides and don’t seem to care!
Once the OEMs get into lazy-backside mode, the vicious circle starts (hardware makers say “OEMs only ship 32-bit OSes, why should we do 64-bit versions?”, OEMs repeat the inverse about hardware makers, and so on). Windows has suffered this 64-bit malaise for years and years, whilst other OSes, particularly Linux, have steamed far, far ahead in the 64-bit department (I bet there are more 64-bit apps for Linux than there are for Windows!).
If you really want 64-bit to take off, then Microsoft should either have pledged not to release a 32-bit Windows 7 (too drastic, arguably, especially if they want it on netbooks, which for some unfathomable reason aren’t 64-bit capable… er, why?!) or priced the 32-bit version slightly higher to OEMs to encourage 64-bit adoption. Windows 7 should definitely be the last ever 32-bit Windows – by the time its successor comes out, 4GB RAM will be entry level and 32-bit OSes will no longer make sense, as I pointed out at the start of this ramble.
This argument kind-of works for new systems, but it doesn’t apply to peripherals. Printers are an excellent case in point.
Why should a manufacturer who sold a printer (along with a 32-bit Windows XP driver included on an accompanying CD) some years ago, and who now no longer makes that model, suddenly be required to produce a 64-bit driver for it? The manufacturer isn’t going to see any extra money for their printer, so where is the incentive? Why should a printer maker do extra work to feed Microsoft/Dell’s bottom line?
From a large (enterprise) user’s perspective… what is the incentive to move all your Windows machines to 64-bit and then purchase a complete set of new printers and other peripherals?
Now if we are talking about a Linux shop… no problem. There is virtually no penalty at all. The manufacturer typically doesn’t write the driver, and even if they do (or did), it is still typically just a matter of a recompile by the distribution maintainers…
> Why should a manufacturer who sold a printer (along with a 32-bit Windows XP driver included on an accompanying CD) some years ago, and who now no longer makes that model, suddenly be required to produce a 64-bit driver for it?
Firstly, 64-bit XP has been out for several years now, so they really should have at least a 64-bit XP driver at this point. In theory, producing a 64-bit Vista driver based on the code for the 64-bit XP driver shouldn’t be too difficult (I’m not an expert on XP vs. Vista driver models, but I’d be surprised if it’s radically different between the two).
There are other alternatives too, of course: buy a new printer that does support XP/Vista 64-bit (printers are cheap – they make their money on the rip-off ink cartridge prices); run 32-bit XP in a virtual machine and print from that; or run 64-bit Linux (possibly virtualised), which will probably have a 64-bit printer driver unless it’s one of those *awful* “Winprinters” – probably the only option that won’t cost you anything but your time.
I have built 10 Vista x64 machines and have had zero problems due to them being 64-bit, so the drivers are ready – time to get the applications switched too, Mozilla, Adobe and MS!
I’ve been using XP64 since it came out…same time as Server 2003…I had initial problems finding x64 drivers for the first 6 months but that was years ago…pretty much everything these days has an AMD64 release as well.
The only problem I have now is an old favorite scanner… VueScan recognizes pretty much everything… I just can’t import into Photoshop directly…
Also…Adobe refuses to release an x64 RAW codec for Nikon cameras…arrgghh.
I do run a 6 PC LAN @ home so I just use another machine when the x64 problems surface.
Being able to use as much RAM as you want, or to multitask, is the main reason to go x64…
I also use VMware a lot, and to run VM images in the background you have to dedicate RAM to the VM.