Here’s the interesting part. This motherboard doesn’t officially support 16 GB of RAM. The specs on the page I linked indicate that it supports a maximum of 8 GB. It only has 2 slots, so I had a suspicion that 8 GB sticks just weren’t as common back when this motherboard first came out. I decided to try anyway. In a lot of cases, motherboards do support more RAM than the manufacturer officially claims to support.
I made sure the BIOS was completely updated (version 946F1P06) and put in my two 8 gig sticks. Then, I booted it up into my Ubuntu 16.04 install and everything worked perfectly. I decided that my theory about the motherboard actually supporting more RAM than the documentation claimed was correct and forgot about it. I enjoyed having all the extra RAM to work with and was happy that my gamble paid off.
Then, a few months later, I tried to boot into Windows 10. I mostly use this computer in Linux. I only occasionally need to boot into Windows to check something out. That’s when the fun really started.
A deeply technical exploration into this particular issue, and definitely worth a read.
nice article about the interaction between OS and hardware!
Windows should make use of the Linux kernel 🙂
Seriously, both the Grub boot loader and the Linux kernel are so much more robust and feature-rich than the Windows equivalents; why even bother with Windows? I would probably be lazy and just install Windows in a KVM guest. But I have to say the author did a very fine job with those ACPI tables. Really an interesting read, thanks for sharing!
evert,
It’s not that uncommon to have these kinds of bugs in BIOS tables. The most correct fix is obviously for the manufacturer to correct the tables and release a new BIOS. However, kernel developers often don’t have enough influence to get the manufacturer to publish a fix (i.e., linux users like the author), so they have to add kernel quirks that identify the flaws and work around the broken BIOS anyway. This is rather annoying because sometimes things don’t work even when the kernel code is technically correct. It’s interesting that in the case of this article, linux was apparently already compensating for this quirk whereas windows was not.
I’ve personally done this once to fix an intel storage server under my linux OS, where the manufacturer decided to make a flash disk visible only in emergency mode. They were using two different sets of BIOS tables, which kind of crippled the server, so I ended up adding a linux quirk to recognize all the disks simultaneously.
If anyone’s curious, this is what the patches look like in linux.
https://pastebin.com/3khKpLZa
In particular, it reads the erroneous value, forces the bit that enables the disk, then writes the value back.
As I remember, the problem would happen when waking from sleep as well, so the patch hooks a couple of different events.
Here’s another patch where I needed to reverse a linux quirk because it was causing a kernel OOPS. In this case I don’t really know if linux was at fault or the BIOS, but it demonstrates how even code designed to fix an incompatibility somewhere can cause it to break elsewhere.
https://pastebin.com/SWusmCmJ
Both patches show how very tiny changes, a single bit in memory or a single line of code, can cause bad outcomes. It’s a wonder our operating systems work at all, haha 🙂 Luckily I haven’t needed to make that many kernel patches to address hardware issues on x86 machines. There’s always something though; this week I’m fighting with ubuntu linux to get it to boot into a bcache root, and I’m disappointed it doesn’t handle that out of the box.
Ah, BIOS tables… a possible reason why one cheap ASRock GF6100/K8 motherboard in an inexpensive PC I built for my buddy kept nuking 2k3 and Linux installs (they would stop booting after a few days) but worked fine under XP… seems it was “meant” only for the latter / possibly not even tested with OSes other than mainstream consumer ones, for which the firmware configs were incorrect.
Unfortunately I wasn’t able (still am not… 🙁 ) to properly diagnose and fix it…
zima,
Many of the driver issues in windows vista and windows 7 were self-inflicted; sometimes the exact same drivers that worked in XP worked just fine in windows 7 if you used a signature bypass hack. Microsoft deliberately chose to block xp drivers even when they technically worked fine.
I don’t know if you know this, but I used to be a windows user exclusively. It was just as I was starting to get comfortable with windows kernel development and writing my own drivers that microsoft began blocking our code from running on our own machines. Personally I was furious; by treating open source developers as second-class citizens on our own machines, they lost many of us to linux.
(not sure why my post made you write about drivers; we didn’t even try Vista on that machine)
Hm, until MS made that change, badly written 3rd-party drivers were one of the biggest (if not the biggest) causes of Windows instability… But it has greatly improved since then.
Anyways, what drivers for Linux have you written in the meantime? 😛
zima,
You mentioned things breaking that once worked in XP, which is why I mentioned microsoft’s driver breakages. I experienced the same.
I mostly stick to mainline as much as I can. However, occasionally I need to work on driver patches for my OS. For example, I maintained a union file system driver until linux got one mainlined. My next kernel project could involve block caching, which is motivated by my desire to make the most out of my new rig. I’ve basically completed my investigation into the two mainline options already, and I’m quite disappointed with both bcache and lvmcache because neither of them is particularly well optimized. This sentiment is echoed by many people who’ve benchmarked them. Bcache has been mostly abandoned by its author. Lvmcache’s hotspot cache has potential, but the incomplete write caching leaves it struggling to accelerate even trivial use cases, and the toolset implementation also leaves a lot to be desired.
There’s another effort called enhanceio, an out-of-tree alternative that I haven’t even taken a look at yet.
That wasn’t about driver breakages, but most prominently about Linux installs breaking / not booting after a while for seemingly no reason; anyways, I’m quite sure that Vista would work fine (well, as good as it could with 1 GiB of RAM), it’s the ~niche OSes that didn’t (and anyways, XP we tried only at the end…).
Also – yikes, you’re messing with storage subsystem; scary stuff… 😛 (because of potential for data loss)
zima,
Well, a lot of the hardware trouble under vista had nothing whatsoever to do with actual incompatibility, only that ms would not let owners install the OEM drivers. So it might have been the issue for you too. Anyways, it’s ancient history now. If MS hadn’t impeded FOSS kernel development in windows, who knows, I might still be doing it.
Somebody’s got to do it, CS people embrace the challenge 🙂
@post by Alfman 2019-04-11 9:19 am
But it wasn’t “hardware trouble under Vista” since we didn’t even try it! 😛 The issues were under Win2k3 (with MS drivers) and various Linux distros (under both 2k3 and Linux it presented itself, IIRC, kinda like disk corruption after a few days, but the HDD was certainly OK…)
And well, personally, no matter how much CS training I received (I’m not even a code monkey…), I feel I would be too incompetent to mess with the scarier parts of the software stack. Maybe I’m not good CS material 🙁 (still, going that way is probably one of the best options for me, since working from home would be optimal due to illness…)
Reminds me of one of my old ThinkPad laptops. The screen went bad and I had to replace it, but after doing so, Windows refused to acknowledge that the screen even existed, but GRUB and Linux saw it and used it just fine. After some debugging, I figured out I had damaged one of the pins in the connector for the screen on the mainboard that was used for detecting the state of the internal display. Linux and GRUB just assumed the firmware configuration for the GPU outputs was correct (and it was, because the firmware just assumed there was a display connected), and thus had no issues using the display. Windows, OTOH, didn’t trust the firmware, and reinitialized the GPU during bootup in such a way that the output configuration got reset, and then failed to detect the integrated display (because of the damaged connector), causing it to refuse to acknowledge that the display even existed.
ahferroin7,
Interesting problem, but I think the reason linux worked whereas windows did not could be that (I’m guessing here…) linux was not using its own drivers and was falling back to the vesa bios instead. That’s certainly the case for GRUB; bootloaders don’t implement their own drivers, they call upon the BIOS. Unless something’s terribly wrong, one can reasonably assume that the drivers hardcoded in the motherboard’s firmware will always work with the motherboard’s hardware. I’ve encountered issues with linux graphics drivers too many times over the years, but reverting to VESA (which is what live-cd/usb environments use) has always been reliable as long as I can remember (although some early systems lacked hi-res modes).
Excellent article!
Cool, until the next grub update decides to put the bios table in a different spot. I did something similar to work around a broken bios, and it did work fine in fedora for a while; then they changed something and it broke. I had to use an ubuntu boot disk to examine what changed, and I observed that ubuntu’s grub handled my broken bios without my hacks, so I just switched to ubuntu and saved myself the trouble. I think at the time I made the hack I was opposed to ubuntu for silly reasons like mir and upstart, but now that they’re on the wayland/systemd train, all is fine.
Bill Shooter of Bul,
I kind of hate grub. IMHO it transformed from something simple into a monstrosity. I use syslinux in my os, but regrettably the major distros have dropped support for grub alternatives.
Alfman,
Thanks for your comments and information above 🙂
Regarding Syslinux… well I don’t know if you regard Arch as a major distro, but I’ve switched to Arch for my desktops and laptop. Arch is very flexible and yes, Syslinux is in the wiki:
https://wiki.archlinux.org/index.php/Syslinux
evert,
No need to thank me for my ramblings, haha 🙂
ArchLinux… you know, I’ve never tried that one. It might be a good fit. When I was trying out a large number of distros in the 2000s (burning many, many install CDs in the process), I don’t believe archlinux ever came up. It was probably in the nascent stages of development.
Arch is really nice. I’ve used Ubuntu, Fedora, Slackware, OpenSuse and whatnot; Arch is close to the Slackware experience but with a very good package manager. The Wiki is brilliant. If you are an experienced Linux user, look no further. If you install Arch for the first time, make a note of all packages you install in a text file; that makes it easier next time. (Installation is hard; a GUI is *not* installed by default. But you get a lot of control back in return.)
They say Arch is less stable (rolling release without much quality testing)… honestly, for me Arch has been more stable than Ubuntu.
Only for servers I like enterprise Linux (RHEL / CentOS / Scientific Linux) because of SELinux. Although I don’t like to deal with SELinux, I have to admit that it is really important for servers.
Maybe I’m biased, but I consider Slackware a major distro (it’s the oldest surviving and most UNIX-like distro today) and it defaults to lilo/elilo. I’m typing this from Slackware 14.2. 🙂
My old i7-860 machine officially supports 16GB of RAM, according to both Intel and Gigabyte (the mobo manufacturer). I had a couple of slots open, so I put in two more 8GB sticks for 32 total. Windows booted but was incredibly slow. I found this Reddit thread where someone had 32GB working fine until he upgraded to a newer GPU with 4GB of VRAM (the same card I have in my machine). https://www.reddit.com/r/techsupport/comments/3o6vtt/32gb_ram_gtx_970_continued_partly_solved/
Like him, I was able to get it running full speed by limiting Windows to using 30GB of the 32 (msconfig -> boot -> advanced options -> limit memory). https://imgur.com/a/OiHV1
It was a little easier than the fix in the linked article, at least.
Only downside is that every major Windows update so far (the twice-a-year kind) has reset that setting, and I have to remove a stick until I can get in and tell it to limit itself to 30GB again.
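For anyone who’d rather script that than click through msconfig after each update: as far as I know (verify on your own machine with `bcdedit /enum`), msconfig’s "Maximum memory" box corresponds to the `truncatememory` BCD element, which takes bytes rather than msconfig’s megabytes:

```shell
# 30 GB expressed in bytes, as bcdedit wants it (msconfig's box takes MB)
echo $((30 * 1024 * 1024 * 1024))
# prints 32212254720; then, from an elevated Windows command prompt:
#   bcdedit /set {current} truncatememory 32212254720
# and to remove the limit again:
#   bcdedit /deletevalue {current} truncatememory
```

That could at least be dropped in a batch file to reapply after each feature update, though I haven’t tested whether updates also wipe BCD elements set this way.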
And here I am with a i5-6500 based machine with “only” 8GB of a possible 64GB, and plenty of breathing room. Granted, I run Linux on this machine and not Windows, so my day to day RAM usage is below 2GB unless I’m really pushing the machine (gaming or working with large video/audio projects). Still, I have yet to bottleneck on memory and I see no need in doubling it, let alone trying to push past that 64GB barrier. When I was younger and built my own systems exclusively, I would spend days doing things like that just for fun. Now, I’m satisfied with a major brand workstation that does what I need it to and sips power, relatively speaking.
That said, it wasn’t too long ago I had a dual-processor Mac Pro tower and registered DDR3 RAM had dropped to a low price point, tempting me to max it out at its physical 128GB limit (of which macOS would recognize 96GB). Common sense prevailed and I didn’t spend the money, but it would have been by far the most RAM ever in a single machine in my collection.
…or heavier browsing; otherwise also Windows stays below 2 GiB.