To provide the best experience for the most-used Linux versions, we will end support for Google Chrome on 32-bit Linux, Ubuntu Precise (12.04), and Debian 7 (wheezy) in early March, 2016. Chrome will continue to function on these platforms but will no longer receive updates and security fixes.
We intend to continue supporting the 32-bit build configurations on Linux to support building Chromium. If you are using Precise, we’d recommend that you upgrade to Trusty.
The writing is on the wall for the end of 32bit – starting with Linux. I wonder how long Google will continue to support 32bit Chrome on Windows. For some strange reason, Microsoft is still selling 32bit Windows 10.
For some reason? Come on. Microsoft goes to great lengths to ensure backwards compatibility.
In this case, I’m sure it’s because they have enough customers that still use 16-bit DOS or Windows applications. Intel was still selling 32-bit chips merely 3 years ago, too (Small ones, but, still…)
Systems with 4GB of RAM or less also benefit from being 32-bit. And since Windows performance has improved and its resource requirements have dropped relative to Windows 7, 32-bit systems that ran Windows 7 would still see an improvement.
Nah. It’s simpler than that: Microsoft’s transition to 64 bits has been a complete and utter mess.
How so?
For starters, putting 64-bit libraries in the %windir%\System32 folder and redirecting 32-bit apps to %windir%\SysWOW64.
Or using LLP64 programming model instead of LP64 like almost everybody else.
Yes, that probably sucks a little bit for those writing compilers. Still, that’s a design decision and not “a mess”.
In addition, it breaks assumptions many application programmers made, because a long can no longer hold a pointer or an array index. I know it is a bad assumption to make, since (u)intptr_t and size_t should be used, but it was a popular assumption nonetheless. Add to that the fact that Microsoft compilers refused for a long time to support e.g. the C99 %z printf modifier for size_t.
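To make that concrete, here’s a minimal C sketch (hypothetical names, not anyone’s actual code) of the assumption that breaks under LLP64 and the portable way to write it:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    int main(void) {
        int value = 42;

        /* The popular assumption: "a long can hold a pointer".
           True on LP64 (64-bit Linux), false on LLP64 (64-bit Windows),
           where long stays 32 bits:
           long as_long = (long)&value;  */

        /* Portable: uintptr_t is defined to be able to hold a pointer. */
        uintptr_t as_ptr = (uintptr_t)&value;

        /* Portable: size_t for sizes and indexes, printed with the C99
           %zu modifier that MSVC took years to support. */
        size_t n = sizeof(value);

        printf("pointer is %zu bits\n", sizeof(void *) * 8);
        printf("stored pointer value: %ju\n", (uintmax_t)as_ptr);
        printf("size: %zu\n", n);
        return 0;
    }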
Changing ‘long’ to 64 bit might help on the applications you refer to, but would be devastating to code bases that relied on them being 32 bit. An assumption just as popular.
Microsoft opted that the ones doing the worst crime (placing pointers in integers) should be the ones fixing their code. As for the array index situation, making ‘long’ 64 bit would not have saved the programs using ‘int’ for indexes.
I fully understand where Microsoft comes from. The existing Win32 codebase was most important to them, even more important than providing a clean start and cross-platform interoperability.
Another gem I have come across where Microsoft defends the choice of LLP64:
http://blogs.msdn.com/b/oldnewthing/archive/2005/01/31/363790.aspx
Here the argument is the use of structs for on-disk formats that don’t use explicit integer sizes.
ISO C90 specifies that size_t must represent the largest unsigned integer type supported by an implementation, historically this was the long type. So it was not a completely unreasonable assumption to make.
C90 makes no such requirement for int.
Certainly depends on which platform you’re coming from. In DOS and 16-bit Windows: short = 16 bit, int = 16 bit, long = 32 bit. That is precisely the reason the structs Raymond Chen mentions use a long instead of an int. They were written that long ago.
Changing long to 64 bit might not have had a big impact if you came from a Unix background, but any code written in the DOS/16-bit era would end up in trouble. And lots of Windows applications come from that era, so it was a perfectly reasonable choice.
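To illustrate the on-disk struct argument with a hedged little sketch (field names invented for the example): a record laid out with ‘long’ in the 16-bit era keeps its layout under LLP64 but silently grows under LP64, which is exactly why explicit widths are the safer choice:

    #include <stdint.h>
    #include <stdio.h>

    /* DOS/Win16-era style: 'long' was picked because 'int' was only 16 bits.
       Under LLP64 the payload stays 8 bytes; under LP64 each 'long' grows
       to 64 bits and the on-disk format breaks. */
    struct old_record {
        long offset;
        long length;
    };

    /* Model-independent version with explicit integer sizes. */
    struct new_record {
        int32_t offset;
        int32_t length;
    };

    int main(void) {
        printf("old_record: %zu bytes, new_record: %zu bytes\n",
               sizeof(struct old_record), sizeof(struct new_record));
        return 0;
    }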
Clearly we have different ideas about what constitutes a mess. I’ll leave it at that.
It is very much in the top of quality measures.
Other ways of measuring quality:
* Does it fit the oath? http://blog.cleancoder.com/uncle-bob/2015/11/18/TheProgrammersOath….
* Do the boss and product owner sign off on it?
…and the very best one:
* 10 years later, do I still think I did it correctly?
dpJudas,
Short answer: tell that to the hosts file
Long answer: The main issue I have with the scheme is that the mangling could have been avoided entirely if MS hadn’t decided to place 64bit resources in (previously) 32bit namespaces. This created the incompatibility that mangling was subsequently required to address.
c:\Program Files -> 64bit programs, 64 bit programs and dlls could have gone anywhere
c:\Program Files (x86) -> 32bit programs, require mangling for backwards compatibility
C:\Windows\System32 -> 64 bit DLLs
c:\Windows\SysWOW64 -> 32 bit DLLs, require mangling
I’m hard pressed to think of any justification for the mangled scheme, which is confusing, complex, and unnecessary. In hindsight, it should never have gotten off the ground. Here is what MS should have done without the namespace over-engineering we ended up with…
c:\Program Files -> 32bit programs, no incompatibility at all
c:\Program Files (64) -> 64bit programs, no preexisting 64bit programs to be incompatible with
C:\Windows\System32 -> 32 bit DLLs
c:\Windows\System64 -> 64 bit DLLs
I actually don’t think mangling was deliberately planned, instead it was probably the natural result of an incremental development strategy whereby developers would first produce a working 64bit windows and only then would they address the need to “add 32bit”. Instead of going back and fixing the 64bit stuff they’d just add a new hack for the 32bit stuff being added to keep it from breaking on the 64bit OS. So it could have been avoided with a bit more planning.
An earlier thread on this topic:
http://www.osnews.com/thread?616089
So, with regard to “Program Files” locations, virtually every single installer I’ve seen installs in %ProgramFiles% rather than C:\Program Files
and the WoW64 system sets %ProgramFiles% to “C:\Program Files (x86)” for 32-bit processes – no mangling necessary, just a different environment at launch. Very few installers hard code the path instead of using %ProgramFiles%, and those that do don’t get their chosen path mangled.
Now, hardcoded path to known system files is different, fairly common, and can’t be changed by the end user at install or after the fact, which is why System32 is retained for 64-bit system files and WoW64 steps in.
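For what it’s worth, here’s a minimal sketch of what that looks like from inside a program (assuming a Windows machine; the environment variable names are the ones WoW64 sets):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* A 64-bit process typically sees C:\Program Files here, while a
           32-bit process under WoW64 sees C:\Program Files (x86). No path
           mangling involved: the process simply gets a different
           environment at launch. */
        const char *pf = getenv("ProgramFiles");
        printf("ProgramFiles = %s\n", pf ? pf : "(not set)");
        return 0;
    }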
I believe it was, as this was the way to ensure existing 32-bit binaries continue to work, and to ensure that sources could be recompiled with little to no changes for 64-bit.
The scheme you describe requires either extra work for the developer to make a program written for 32-bit windows to build for 64-bit windows, or a crystal ball to know well in advance that x86 would turn into a 64-bit architecture and architect Windows around it from the beginning – remember, NT 3.1 came out in ’93, after a few years of development. The MIPS R4000 is considered the first genuine 64-bit microprocessor, and was released only a year and a half before NT 3.1 and Win32, and wasn’t in a system that at all resembled a PC that Windows might eventually run on (something like 4 feet tall, 3 1/2 feet wide, 3 1/2 feet deep).
There was no reason at the time to even imagine that Windows would have to face these issues.
Actually yes, they should have, because in 1993 Microsoft was designing and targeting NT on a 64-bit platform.
But, they weren’t bringing a 32-bit NT with existing 32-bit software onto a compatible 64-bit platform.
NT on Alpha was purely 64-bit, as the Alpha itself was purely 64-bit.
And for MIPS, all versions of NT required an R4000, which was 64-bit. I can find very little on the web about NT on MIPS beyond that it existed and was sold for a short time, but I’d be surprised if Microsoft had a 32-bit ABI for MIPS alongside the 64-bit ABI, since NT required a 64-bit MIPS chip.
And, if that’s the case, no, they didn’t know in 1993 that they would have issues of supporting 32-bit and 64-bit ABIs on the same system.
Installing and running 32 bit x86 software on Alpha was already achieved by DEC with their FX!32 technology.
Then we had Itanium which did the same thing natively. On Linux ia64 systems and Itanium processors with x86 hardware emulation you can directly execute x86 binaries.
So there was plenty of experience, years before AMD came around and introduced their 64-bit architecture.
FX!32 on Alpha was a DEC creation, and used binary translation to convert binaries before Windows’ own loader was invoked. FX!32 apps called natively into Windows DLLs and the kernel.
As another poster pointed out, NT on Alpha was essentially 32-bit (Alpha itself didn’t have a 32-bit mode, so I take that to mean Windows was using 32-bit pointers internally, and 32-bit integers in data structures), so this translation would’ve been relatively easy, since everything would’ve retained the same size (no need to modify code to start handling 64-bit integers instead of 32-bit, for example).
Windows also ran x86 software on Itanium systems, using – you guessed it – WoW64, which was introduced with Windows Server 2003, which was also the first Windows with an x86-64 version.
You’re mistaken, NT on the RISC machines (MIPS, Alpha, and PPC) was 32-bit even though some of the chips themselves were 64-bit capable (MIPS & Alpha). By the time NT was being designed, there was no doubt high-performance architectures were moving to 64-bit. Among other things, because NT itself was being developed on a 64-bit platform.
And yet, it took a decade for NT to move to 64-bits, and did so in a kludgey way.
I stand corrected.
But, my point still stands: NT on Alpha and MIPS didn’t run 32-bit and 64-bit applications concurrently.
Your point is moot. NT on those machines only ran 32-bit applications.
I’m simply highlighting that in the scalability scheme of things, NT was a poorly architected system.
Drumhellar,
Well, you’ve already mentioned the solution: use environment variable paths. This has been the best practice since the 90s. There’s no technical reason software can’t be installed almost anywhere in the system, including other drives. As you may recall, this used to be the norm for all software. Unzip and run…anywhere. c:\apps\… desktop\game\… It doesn’t matter. The software could be 16bit or 32bit, and it still works with 64bit executable too.
It is easily demonstrated that installation paths are not a technical requirement for most software, but rather an arbitrary convention. I’m all for conventions for the sake of consistency. But the convention MS came up with is bad. It created an install path dichotomy between 32/64 programs, which has no technical merit to begin with, but whatever. The more egregious problem is that the convention uses paths that conflict with one another and requires mangling at the file system level to reconcile the real FS with what apps expect to see.
Given the fact that the app path convention was totally arbitrary (ie not technically constrained), it would have made far more sense to adopt a convention that could satisfy both 32 and 64bit without any mangling. Anyways what’s done is done, the mangling is a permanent addition to windows and there’s not much MS can do to fix it now.
Luckily, it isn’t baked in at low levels – it is merely a subsystem in the same way that the old SFU/SUA was. On Server editions, WoW64 is an optional component, and I can see Microsoft eventually making it optional on the Desktop (or some enterprising hackers making it optional instead), because as it turns out, it isn’t something that is tightly coupled to the OS.
Drumhellar,
The mangling was never necessary for backwards compatibility, it’s a byproduct of poor planning by the 64bit development team itself. I find it somewhat contrived to think of it as a “compatibility solution” because it merely “solved” a problem that 64bit windows itself created.
It’s as though MS released a new OS with a login screen that would not allow users to log in on the Sabbath, and simultaneously produced a kernel workaround to fix said limitation by mangling the date exposed to the login process. Sure, it “fixes” the problem, and it’s a mundane detail that end users don’t need to concern themselves with. But it’s a convoluted hack for a problem that would not have existed had they not introduced it completely unnecessarily. While I hope you realize this example is meant to be preposterous, it does convey my feelings about what MS did. There would have been no inherent 64bit compatibility problem had MS not introduced one with 64bit Windows.
This is the consensus with the stack overflow crowd as well.
http://stackoverflow.com/questions/949959/why-do-64-bit-dlls-go-to-…
Interesting analogy, but I’d say it’s more like they developed a system that wouldn’t allow logins on Sundays (which most Christians consider the Sabbath, since Jesus was resurrected on a Sunday), but automatically detected that the user was Jewish or a Seventh-day Adventist, and prevented them from logging in on Saturday instead.
Or something. Both analogies are rather awkward.
And, as for your stack overflow link, there is an overwhelming level of indifference to the situation: System32 is kept for historical and compatibility reasons, and WoW64 is named as an acronym for “Windows on Windows 64”. Not a whole lot of opinion is expressed in any of the answers, or by most of the commenters.
Drumhellar,
Keep in mind it’s Vista we’re talking about here; the 64bit transition was messy as hell. My company even faced 64bit problems, and they were a .NET shop. Luckily it’s a decade behind us.
So anyways, you want to agree to disagree on this?
If those are the worst complaints you have about their transition, then I guess they pretty much nailed it, then.
I mean, keeping the system path for native apps as System32 means nothing breaks when recompiled. When Windows moved from 16-bit to 32-bit, the API changed significantly, so it was cool to rename system file paths for 32-bit software, as you had to re-write Win16 apps to be 32-bit anyway. However, the API stayed essentially identical in the move to 64-bit; software expecting system files in C:\Windows\System32 would otherwise break when compiled to 64-bit. SysWOW64 is where 32-bit calls to System32 get redirected. Since the 32-bit system is the non-native system, it is the one that gets redirected.
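A program can even ask whether that redirection applies to it; a minimal sketch (Windows-only, error handling omitted, and assuming a toolchain where IsWow64Process is available):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        BOOL wow64 = FALSE;

        /* TRUE means this is a 32-bit process on 64-bit Windows, i.e. its
           "System32" file and registry accesses are being redirected. */
        if (IsWow64Process(GetCurrentProcess(), &wow64)) {
            printf("Running under WoW64: %s\n", wow64 ? "yes" : "no");
        }
        return 0;
    }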
LLP64 accomplishes the same thing: Compatibility. Software can be built for either 32-bit or 64-bit mode, with no changes necessary to the source.
It makes sense, too. Compatibility with existing Windows software really is more important than easier porting from other systems.
As I wrote above, these are not my biggest complaints, these are examples that illustrate nicely how Microsoft handled the transition.
By prioritizing that applications can be ported with as little modification as possible, they now have the SysWOW64 and LLP64 baggage to carry around basically forever.
Also the “no changes” argument doesn’t hold. LLP64 just breaks different assumptions than LP64 for program(mer)s which came from a 32 bit world.
For starters, the XP 64-bit rollout was a complete mess, and the whole “windows on windows” approach just highlighted what an utter architectural disaster Windows as an OS really is. It’s 2015 and MS’s development chain is still mainly 32-bit.
MS never had a synchronized company-wide transition to 64-bit, or any actual 64-bit game plan for that matter. I.e. OEMs are still selling 32-bit Windows on Machines that have used 64-bit processor families for almost a decade now. Etc, etc, etc.
I understand this is not something MS fans like to hear.
That’s debatable, as a 32 bit Windows only offers half of its 4 GB address space to each application. Add the problem of fragmentation in a 32 bit flat memory model and it gets even harder to use all the memory. Only running many smaller processes seems to be an advantage here.
You also lose the guarantee that the compiler can always take advantage of newer registers and instructions (SSE, AVX).
Steam is hardly representative of the PC market.
Besides, 64 bit systems with 32 bit UEFI cannot run x64 Windows, so either Microsoft needs to fix that or provide 32 bit support indefinitely.
Similar problems for old 64 bit systems that don’t support CMPXCHG16b (support for those was dropped with Windows 8.1).
True, but on the other hand, being 32-bit also conserves bandwidth. It is a toss-up, of course. Extra registers from 64-bit mode helps in some situations, extra bandwidth available in 32-bit mode helps in others. 32-bit programs often use less memory, though.
But, again, few organizations are as good as Microsoft when it comes to maintaining backwards compatibility.
I agree. I just wish they’d sometimes nudge the industry in the right direction. Windows XP 64 bit edition is now over a decade old. Because they keep releasing new versions with only 32 bit support they effectively force developers to continue building 32 bit executables.
If it were BIOS (which is 16-bit), Microsoft could get 64-bit Windows loaded on that.
With UEFI, though, bitness matters. 32-bit UEFI can only boot 32-bit Windows, and 64-bit UEFI can only boot 64-bit Windows. (Linux and OS X both have hacks to get a 64-bit OS running on a 32-bit UEFI.) Because of the storage space savings of 32-bit Windows, and the lower RAM usage (I’d say there’s almost zero benefit to 64-bit Windows on a 2 GiB or less device, and limited benefit on a 3-3.5 GiB device), manufacturers are using 32-bit UEFIs to get 32-bit Windows (because Microsoft’s pushing UEFI) and improve the storage space and RAM available in their low-end devices.
I think Microsoft’s the only one shipping a 64-bit device with 2 GiB of RAM (the Surface 3), and that’ll mainly be because they wanted 64-bit for the 4 GiB version.
A prime example of how 32 bit was more important in the Windows world: the PC I use currently has an XP 32 bit video driver, but no XP 64 bit driver. 64 bit wasn’t significant in the Windows world until Windows 7.
SSE4 and AVX can, I believe, be run from 32-bit code, for what it’s worth…
You can run AVX in 32-bit mode, but it is gimped; e.g. you can only access 8 xmm and ymm registers, not the expected 16.
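Right; a quick sketch just to show the same AVX intrinsics build in either mode (with GCC/Clang, something like -mavx, plus -m32 for the 32-bit build; the 32-bit build simply has only ymm0-ymm7 to allocate from):

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        /* Works in both 32-bit and 64-bit builds on an AVX-capable CPU. */
        __m256 a = _mm256_set1_ps(2.0f);
        __m256 b = _mm256_add_ps(a, a);

        float out[8];
        _mm256_storeu_ps(out, b);
        printf("%.1f\n", out[0]); /* prints 4.0 */
        return 0;
    }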
Memory fragmentation is deliberate these days. It’s a security feature to help try to prevent buffer/stack overflow exploits. They fragment the hell out of memory and randomize the mapping to keep code/stack/buffers from landing in a predictable place, denying exploits an easy vector. They count on huge CPU caches and high memory bandwidth to counter the speed drop fragmented memory causes. If you look at CPU caches, they’ve gotten huge compared to even five years ago.
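You can see the effect with a trivial program; run it a few times on a system with ASLR enabled and the addresses jump around (and a 64-bit address space gives far more room to randomize over):

    #include <stdio.h>
    #include <stdlib.h>

    int some_global; /* lands in the data segment */

    int main(void) {
        int on_stack = 0;
        void *on_heap = malloc(16);

        /* With ASLR these addresses change from run to run. */
        printf("data:  %p\n", (void *)&some_global);
        printf("stack: %p\n", (void *)&on_stack);
        printf("heap:  %p\n", on_heap);

        free(on_heap);
        return 0;
    }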
Memory obfuscation is not necessarily the same as memory fragmentation. Also, in the consumer space at least, cache sizes have stayed remarkably stable for almost a decade; between 4 and 8MB.
On the HIGH END of CPUs, yes. It’s the limit of what they could add in the space available. On the LOW and MID ranges, cache sizes over the last decade have gone from 16-64 KILOBYTES to 1-2 MEGABYTES… a tremendous growth. It’s the low and mid ranges that affect the consumer base the most, not the high end.
I don’t think you comprehended my post in the least.
Also, what level of cache are you talking about? And what are those fabled CPUs you’re referring to? Cache-size-wise, the growth between the Core 2 and Core i microarchitectures really has not been dramatic in the least.
And I don’t think you comprehend mine. I’m talking about mass-market CPUs – think the sub-$100 range (for the CPU, not the whole computer). Those have a hundred times as much cache now as 10 to 15 years ago, ESPECIALLY AMD. Intel is closer to 10 times (depending on which models you compare), but that’s still a huge change.
*sigh* You started with 5 years ago, now you’re moving the goal posts back 15?
Again: within the past 10 years, cache sizes have not grown that much, be it Intel or AMD, in the low-end x86 consumer space. Which is the point I’m trying to get across. Do you want me to write it slower?
Or when they just can’t be bothered to produce 64bit versions of libraries that have wide usage. Case in point, the common control and CAPICOM libraries that were never updated, requiring that 32bit versions of MS Office be used. So if your enterprise has developed office solutions that use them, and Microsoft some day decides to stop providing 32bit Office binaries, then you are screwed.
My guess is that Microsoft realizes there are still many 32-bit machines out there that could be converted to Linux (or some other Un*x) if they don’t maintain support. A loss leader for the sake of maintaining market share? I don’t know anything about their 32-bit version of Windows 10 – perhaps it runs much lighter than their 64-bit version?
A few years ago I was in a government workplace that was transitioning from XP to 7, and someone in the IT department employed their own ‘reality distortion field’ and installed it on my 32-bit workstation, and claimed I’d experience no slowdown or other problems. That turned out not to be the case. It was slow, it froze up frequently, and generally made my life hell until I escalated the issue myself and had a 64-bit machine installed at my desk.
All that to say that I’m not sure I’d *want* Windows 10 on my 32-bit box, even if I were a Windows fanboy, based on my experience with 7. (As it is, I’d take renal failure over Windows, period.)
On the other hand, the world of FOSS offers many options for keeping an old box running a current operating system, and I think Redmond knows this.
The only reasons I know of to have a 32-bit version of Windows are:
16-bit Windows applications (think Windows 3.1).
And DOS-applications.
Neither is supported anymore on 64-bit Windows.
Now that I think about it: maybe some old drivers for some special hardware? Driver models have changed between Windows versions, so I don’t know how realistic that is.
Anyway, yes, maybe you are right. People could still buy non-64-bit machines in 2006 I guess.
Aside from video drivers, the Windows driver models have stayed pretty stable since Windows 2000 or so. So that’s absolutely a reason to stay on 32-bit.
And, the reason to ship a new device with 32-bit Windows (lots of tablets and netbooks shipping that way) is to reduce the amount of storage space required – a 64-bit Windows install has to have both 32 and 64-bit libraries, and with the razor-thin margins that these devices have…
Ohh, no.. what a horrible idea 🙁
Where? Desktops? Laptops? Servers? Mobile?
I don’t see it.
You’re thinking of NEW systems where everything is 64 bit now. Even ARM is now almost all 64 bit. However, there’s still a HUGE number of 32 bit systems still in use. Hell, my last PC from 3 years ago was a 32 bit Intel system. It’s now my backup system, but I still have it set up.
I was just bitching about this at work. Like, why are companies still developing everything for 32bit? I remember there being a few games (I want to say Red Mercury was the name of one) that had a native 64bit version with extra textures and objects compared to the 32bit version. Yet that still is far from common. Anything that uses DirectX 11+ should be 64bit in my opinion. Face it, if you’re still running a CPU that is 32bit only, you’re not playing any modern games on it anyhow… meanwhile, I see more games under Linux have 64bit binaries, yet Steam is still 32bit (I think. It certainly depends on 32bit libs, but that could be for compatibility).
32bit software is usable by both 32bit and 64bit users. 64bit software is usable only by 64bit users. Considering the vast millions of people still using 32bit, why would you voluntarily limit your own potential customer base? Most of the software the Average Joe uses sees little-to-no benefit in the 64bit realm. For those guys all 64bit means is more ram usage.
Here’s something else to think about… Although newer 64bit systems are common place in first-world countries, there are many places that gladly welcome our `old` computing hardware. It’s easy for wasteful people to declare something `dead` simply because they no longer use it but that’s nothing more than looking through rose-colored glasses.
The rush to 64bit makes sense where computational speed matters: more-than-casual gamers, people needing to do high-level math, cases where data transfer speeds actually matter, etc… But none of that describes a typical user’s needs.
Hi,
More registers (and less “spilling” to stack), more efficient calling conventions, much better support for 64-bit integers (especially division by a 64-bit divisor), plus things like SSE and SYSCALL are known to be supported at compile time.
Note that (in theory, and on Linux in practice) it’s possible to have 64-bit code that uses 32-bit addressing (and 32-bit pointers) to retain all the benefits of 64-bit code (without the slight increase in memory usage that larger/64-bit pointers would cause).
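A hedged sketch of what that looks like in practice (on Linux with an x32-capable toolchain; the point is just that pointer size and instruction set are separate choices):

    #include <stdio.h>

    int main(void) {
        /* gcc -m64  : 64-bit code, 8-byte pointers (normal x86-64)
           gcc -mx32 : 64-bit code and registers, 4-byte pointers (x32 ABI)
           gcc -m32  : 32-bit code, 4-byte pointers (plain i386) */
        printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
               sizeof(void *), sizeof(long));
        return 0;
    }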
– Brendan
Yes, there are technical advantages 64bit brings, but nothing you’ve mentioned manifests into a better experience for the average user and their very basic & minimal needs. For years now computers have had more than enough power to meet those needs. Until people’s most common computing needs change and become much more demanding, 64bit simply doesn’t provide any noticeable benefit to average users. That’s all I’m saying.
The proof is simple… Have an Average Joe check their Facebook on both 32bit and 64bit (comparable) systems, then ask that person which one was better. Unless you intentionally make the test lop-sided, the answer is going to be neither, they’re the same. The test will yield the same results whether it involves Facebook, paying bills, playing Solitaire or Candy Crush, or anything else the average user does.
ilovebeer,
I agree. In practice there is more than enough computing power for typical user needs. The difference is marginal for most applications that don’t even benefit from 64bit registers. Having more registers is really great, but the far more perceivable bottlenecks are with disk and network IO. Graphics are generally offloaded to GPUs.
IMHO the biggest selling point for 64bit is the ability to directly address >4GB ram without using page extensions, yet this benefit is easily lost on consumers who are still using 4GB machines.
32bit market share is still in double digits. Consolidation onto 64bit is fine and all, but the indifference is understandable.
One area where 64 bit shines is the huge new open-world games. Example: Fallout 4. This type of game is very problematic in 32 bit – you really need a high-speed SSD to keep it from grinding to a halt every time you cross a cell boundary. Skyrim was 32 bit and suffers as a result. Not just stutters from drive loading, but limits on the number of mods and the size of textures and meshes you can use with the game (very important for Bethesda games). Even Fallout 3 suffers from 32 bit, and it’s 7 years old. I can understand Fallout 3 being 32 bit as 64 bit Windows wasn’t significant 7 years ago, but Skyrim should have been 64 bit. It’s good to see Fallout 4 is 64 bit.
Hi,
Um, what? This is not “proof” any more than putting bananas in your ears and searching your lounge room for sharks is “proof” that the bananas effectively prevent shark attacks.
Note that “the average PC user” doesn’t exist. Some play high-end computer games, some use CAD software, some do movie editing, some run compilers, some are analysing stock market prices, some are simulating electronic circuits, etc.
By limiting things to “minimal needs” and then presuming that the “average PC user” is someone that doesn’t need a PC in the first place; you’ve constructed an artificial and irrelevant work of fiction that “proves” nothing at all.
Also, understand that I’m not saying the performance improvement is so significant that everyone needs it; I’m only saying that (for 80x86) there are more benefits (significant or otherwise) to 64-bit than just larger virtual address spaces alone.
– Brendan
Nobody is debating that. Again, I’m talking only about 32bit vs. 64bit as it pertains to the most common tasks a typical user performs.
64 bit has both performance and security advantages (address randomization techniques become more effective). This is of course relevant to common tasks that a user performs.
Are you replying to my post(s) without actually having read them? Yet again, I’m not talking about technicalities. I am talking about the perspective of the Average Joe. “Better” performance and address randomization are meaningless to somebody who uses their PC to check email, troll around on Facebook, and do other common tasks. Users will have the same experience regardless of using 32 or 64 bitness. I have yet to hear a single person say they prefer checking email and Facebook updates in 64bit because 32bit is inferior in some way.
You mean the average Joe who insists on having an octa-core CPU instead of a quad-core in his smartphone because it must be twice as fast!
Or who prefers a 560 dpi QHD screen instead of a 400 dpi FullHD one, not that his eyes can tell the difference anyway.
Do not believe for a second that it has escaped Average Joe whether the system is 32 or 64 bit. On Windows, it says right there in the properties of “My Computer”. Average Joe may be partially or totally oblivious to the advantages that it brings, but he still has a vague idea (reinforced by marketing) that 64 is better than 32 in some way.
Firefox ran out of address space when producing 32 bit PGO builds several times in the past.
https://bugzilla.mozilla.org/show_bug.cgi?id=543034
https://bugzilla.mozilla.org/show_bug.cgi?id=709193
https://bugzilla.mozilla.org/show_bug.cgi?id=709480#c18
I guess Chrome is facing similar issues when building for 32 bit platforms (AFAIK Chrome already dropped support for 32 bit build environments and 32 bit PGO).
And 32 bit is still supported if you build Chromium yourself, or use Windows or ChromeOS (ARM/x86).
As seen from the comments, most readers know dozens of very valid reasons for this, but Thom somehow fails to grasp a single one of them. And this person is site editor? Seriously?
Sometimes I think he’s just a bit cheeky to get a discussion going. 🙂
Not really strange. There is the Connected Standby feature, which is mandated for Windows-based tablets and which is still i386-only.³ Intel doesn’t even ship amd64 firmware for its Atom SoCs,⁴ and (unlike Linux)⁴ amd64 Windows can’t boot off i386 UEFI.⁵
¹ ftp://ftp.pcbsd.org/pub/handbook/9.2/pcbsd9.2_handbook_en_ver9.2.h…
² http://www.dragonflybsd.org/release38/
³ http://www.anandtech.com/show/8038/windows-81-x64-connected-standby…
⁴ https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Int…
⁵ https://technet.microsoft.com/en-us/library/hh824898.aspx
From your link, re: Connected Standby:
“Today, we can confirm that Connected Standby is now available in 64 bit installs of Windows 8.1. Anand has confirmed that it is working in the new Surface Pro 3, and Intel has also confirmed to us that the drivers and firmware are complete to allow Connected Standby on Bay Trail-T processors.”
At my location you can’t buy a Windows tablet with 64-bit firmware from Intel. So at least here that never happened.
The Surface Pro 3/4 and Surface 3 aren’t available? Or any of the enterprise tablets with 4 or more GiB RAM?
If I go to the mall, I still see 32 bit netbooks on offer. Obviously people are still buying them. And it makes sense to me. E.g. for traveling they are a lot more versatile than tablet computers.
The main reason is that a lot of new hardware being produced is 32bit-only. A lot of tablets, all-in-ones and low-end notebooks have 2GB of RAM, a 32bit CPU and 32GB of flash storage. This is simply a no-go for the 64bit version of Windows.
There is memory pressure on a 64bit system (you have to have two kinds of supporting libraries in memory at once). There is storage pressure (64bit code is longer/sparser than 32bit because of longer addresses). And Intel is, I believe, still selling badly manufactured (i.e. half-broken) CPUs as “32bit” (besides 32bit-only production).
Making a clean install “from scratch” with Win10-32 doesn’t make much sense today. There are some very specific cases where it could still be useful, but that doesn’t concern the vast majority of users. Nobody has mentioned upgrade paths yet, though (or I missed it).
Don’t forget that most Win10 installs out there today are upgrades from previous versions of Win7/8/8.1. There were still quite a lot of Win7-32 installs out there, and even 8/8.1-32 (also from previous upgrades).
There is just no direct upgrade path from a 32bit Windows to a 64bit Windows and most users don’t really know how to back up their data, make a fresh install and reload their data. They got their Win10 from Windows Update.
To provide the best experience…
Nowadays, in the name of “providing the best experience”, companies justify every move they make, even when it has nothing to do with “experience”.
Well, technically it is probably correct because now they can put all their people on 64bit.
So for 64bit people this means they will get a better experience
For 32bit people …. too bad
In reality, this is just a “we provide you with this for free so we can do whatever we want and we don’t want to do 32bit anymore”
I have a 2007 Mac Mini that used to be able to run 64-bit until Apple upgraded/crippled the firmware to run only 32-bit operating systems. I guess it is finally time to upgrade to a 64-bit EFI Mac.