As inevitable as the tides rolling in: every time a new Windows version is released, someone with too much time on his hands tries to install it on extremely outdated hardware. Sure, it won't be usable by any standard whatsoever, but it's still a fun thing to do. Of course, Windows 7 couldn't be left out.
Windows 7, released October 22, 2009, is the latest and greatest operating system offering from Microsoft. The official system requirements call for a 1 GHz processor, 1 GB of RAM, and 16 GB of available disk space. A member of The Windows Club Forum defied those requirements and managed to get Windows 7 running on a Pentium II 266 MHz.
The PII 266 MHz was released by Intel in 1997, so we're talking about 13-year-old hardware. The forum member's machine had a 4 MB graphics card, and Windows 7 "ran" on both 128 MB and 96 MB of RAM, but complained of too little memory when the machine was configured with 64 MB.
His next attempt will be getting Windows 7 to run on a Pentium I at 166 MHz with a 1 MB video card. Useful? Of course not. Does it prove something? Not really. Still, it's cool.
Yes, it is fun to see what can be done with old hardware. Windows 7 is a nice OS. I have an RC on an Evo610c and it's running very well. On my ThinkPad 600 with 266 MHz of power I'm running PCLinuxOS MiniMe 2009. It's my airport laptop that doubles as a small table for my coffee and muffin. With 224 MB of memory it's very usable; it's what I'm using to comment with right now.
If it works… it’s ok!
Setting aside new uses and the need for more processing power, it is the OS itself that is pushing the limits.
I always wondered about the big mess in development.
It is BIG in size… and that is always trouble for the hardware left behind… and it occasionally has some of the old BASIC-style spaghetti in its structure.
Not very clean, to be cleaned later… and later…
—
Still have a Compaq laptop running at 133 MHz with 64 MB. Still fast. It used Win95 but now runs WinME in a LitePC install from http://www.litepc.com/
Still fast and useful… but it has NEVER touched the internet NOR had any new installations besides the originally intended ones (compilers and MAME-064).
Even XP can be reduced to something efficient, but it needs a 400 MHz CPU and 256 MB of RAM.
Compare new OSes with DOS (where drivers were loaded only if needed). The promises of object coding failed when it opened the door to fast, unplanned programming.
One would expect libraries to be loaded according to the hardware, the configuration, and occasionally on demand. Did it happen? A bit.
But look at "other OSes" like http://www.menuetos.net/ (now 64-bit) and http://kolibrios.org/ (32-bit and Menuet-based). There are a lot of other examples.
—
It seems it is not so much what we do, but how we build it.
Object oriented programming, which is what I assumed you meant by object coding, had more to do with development speed and robustness than high performance code.
Right! OOP was criticized for being BIG and SLOW.
Not really. It can be just the opposite, and that was one of the reasons it was designed the way it was.
If you use OOP planned as different libraries to be used in different situations… (just like using one set of routines for one CPU and another set for a different CPU)… then you have several libraries available but just use what you need at that moment, as sketched below. Very low RAM consumption… and code as efficient as you wrote it.
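A minimal C++ sketch (my own illustration, not from the thread) of the run-time binding the poster is describing: several variants of the same routine exist, a probe picks one at startup, and only that one is ever used. The names blit_sse, blit_generic and cpu_has_sse are assumptions for the example; in a real build each variant could live in its own loadable library.

```cpp
#include <cstdio>

// Hypothetical implementations of the same routine, one per CPU class.
// In the scheme described above each would sit in a separate library,
// and only the matching one would be resident in memory.
static void blit_generic() { std::puts("generic blit"); }
static void blit_sse()     { std::puts("SSE-optimized blit"); }

// Illustrative stand-in for a real CPU-feature probe (e.g. via cpuid).
static bool cpu_has_sse() { return true; }

int main() {
    // Bind the routine once, at startup, instead of branching everywhere.
    void (*blit)() = cpu_has_sse() ? blit_sse : blit_generic;
    blit();  // later callers just use whichever implementation was chosen
    return 0;
}
```

The design point is simply that the selection happens once and the unused variant never has to be loaded, which is the low-RAM argument being made here.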
The organization is very tree-like, but it is not used as much as it should be. People do not take the problem into account because it is argued that the problem is the code, not the organization. The user should get more memory and a faster CPU so the team does not ALSO have to deal with organization.
However:
If such effort was made… we wouldn’t have so many dependency problems… and code would be faster.
If such effort was made… the team would not have to deal later with a mess in inter-library connections.
But it would also eventually break at another point… it always does.
The point is: OOP can be faster and smaller. People just stop caring once the code appears to work.
Charles Moore (creator of the FORTH language) showed how in his philosophy of (and tips on using) FORTH. I still believe it is the best programming-language training available. But that is a personal view.
I think the criticisms of OOP come from bad experiences with Java; unfortunately, OOP became synonymous with Java and all the crappy slowness that comes with the territory. With that being said, I'm a Luddite and I stick to my old procedural languages. I'm sure OOP is wonderful, but I've never been able to get my head around it – though that is probably due to exposure to Java via an O'Reilly book more than anything else.
That's why you should look at Object Pascal – in Wirth-family languages, where pointers are already typed and you have 'record' instead of 'struct', objects make a HELL of a lot more sense.
In most of your AT&T-syntax languages (C++, Java, PHP), objects are such a convoluted mess that they feel like they were bolted on at the last minute – not like they are an actual useful part of the language. In Modula and Object Pascal (like Free Pascal or Delphi) it's an extension of the 'record' type, and the handling/inheritance is a hell of a lot clearer.
Even if you are going to use them in other languages, I suggest taking the time to learn them in Delphi and/or Free Pascal first so you can figure out how they are supposed to work, BEFORE you waste time trying to use them in environments that have half-assed implementations.
Especially when said implementations are in languages that are already needlessly cryptic and likely all little more than a cruel joke.
I'm still finding my legs with C, but I am tempted to learn Objective-C given my Mac background. I'd love to hear people's experiences with Objective-C, given that it appears to have a lot nicer syntax compared to C++.
I consider Objective-C WORSE than C++ because it takes something already needlessly cryptic and makes it even MORE cryptic. "Let's strip out all punctuation, that will make it better!" – NOT. Let's use a + sign instead of the word 'static', let's remove the parentheses around parameters passed to methods, let's just throw everything on a line separated by nothing but spaces, so unless you know what every element is you cannot even figure out what the object name is, what the method name is, what the parameters are, or even what's being returned, without counting the order from left to right.
Yeah, great system there guys. It’s sad when “simplification” actually makes it even more complex.
I agree. I really hate C++. It has all these great language features… that blow up dramatically if you actually try to use them in the way you would expect, for the task you think they're for. It's so easy to screw up inheritance in C++… but at least, when you do, the compiler's errors are helpful. I mean, it's perfectly clear that "missing vtable" means you forgot to define a non-inline virtual member function, right?
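For anyone who hasn't hit it, here is a minimal sketch (mine, not from the thread) of the usual cause of that error under the GCC/Clang ABI; the class name Shape is a made-up example.

```cpp
#include <cstdio>

// With GCC/Clang the vtable for a class is emitted in the translation
// unit that defines its first non-inline virtual member function (the
// "key function"). If that definition is missing, no vtable is emitted
// anywhere, and the linker - not the compiler - fails with the famously
// unhelpful "undefined reference to `vtable for Shape'".
struct Shape {
    virtual void draw();       // key function: declared here...
    virtual ~Shape() {}
};

// ...and defined here. Comment this definition out and relink to see
// the vtable error instead of a plain "draw() is undefined" diagnostic.
void Shape::draw() { std::puts("drawing a shape"); }

int main() {
    Shape s;
    s.draw();
    return 0;
}
```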
I learned Ada for a project recently, and basically thought, “man, this is what C++ could have been…” An exception mechanism, a rich and expressive type system, the ability to create abstract types with private data, a much more sane object and inheritance system… And all in a generally much more predictable and easier to grasp (if admittedly much, much more verbose) syntax than C++.
Now, if only we could convince the rest of the world that C++ is a tragic mistake…
XP doesn't need "at least a 400 MHz CPU and 256 MB RAM"… Well, I guess it depends on your idea of "usable" :p
A stripped-down XP ran fairly snappily on my old P75 box with 256 MB of RAM; it used about 40 MB on bootup.
I had Win XP running comfortably on a Pentium 166 with 64 MB of memory. By comfortably, I mean I didn't get bored waiting for applications to start, but your attention span may be shorter than mine. I think those were the specs. It was several years ago, and I had 5-6 computers around with similar specs, so it might have actually been a 233 AMD instead.
Anyway, I remember having to put more memory in during the installation, then moving it back down to 64 MB and having it run OK. Why? We needed to test Win XP at its worst, to see if our app would run okay. Because that's the stupid thing one of our biggest customers would do: spend tons of money on stupid things (office pool table, jukebox, soda fountain, beer keg), but not on computers.
I remember using standard WinXP Professional on a Dell Dimension 8200 machine with 128 MB RAM and a Pentium 4. It nearly killed me and almost gave me an ulcer. I had to use that shit for over a year. The Dimension 8200 used RDRAM, and it was expensive as hell, so I could not upgrade the RAM; I was stuck at 128 MB.
Each time someone says that standard WinXP runs fine on a 128 MB RAM machine, very bad memories come back. I wouldn't wish that experience on my worst enemy, actually. I can't imagine how WinXP worked on 64 MB, trying to do real work.
However, WinXP on a 256 MB machine worked well.
On my old computer, a P3 550 MHz with 64 MB RAM, I installed Win XP. It truly runs slowly, but it was possible to run Firefox and AIMP.
For old machines like that, the BSDs are interesting options.
Yes, as opposed to DOS, most modern OSes aren't total crap. There was NOTHING good about DOS, and I am continually surprised that people bring it out as some example of a good OS in some way. Who the hell wants to go back to maintaining 5 billion different config.sys setups for different memory needs?
We are using them, not DOS.
[q]There was NOTHING good about DOS and I am continually surprised that people bring it out as some example of a good OS in some way.[/q]
Well, DOS IS a reference point for simplicity and problem solving by ingenuity. Simplicity keeps problems (and solutions) very clear and discernible.
That is (maybe) the point of DOS in comparison with today's OSes… besides the GUI environment, naturally.
By today's "needs" DOS is indeed a piece of outdated crap… but there's something in it that is appealing, and (at least to me) it is a reminder of something that can be changed… and that is small enough to be efficient.
I used {commo} as a comms terminal and "jabber" as a FidoNet client… Naturally I still miss the convenience of deciding which editor to use in a GUI OS. Not because of the GUI OS, but because of the way applications are built. Does anyone feel the same?
It seems that with GUIs, ingenuity and the UNIX philosophy were forgotten. And so… applications grow like hell. Charles Moore's philosophy (he created the FORTH language) has always been my favorite beacon. The opposite is using tools as push-button toys.
Configuration needs seem to have nothing to do with simplicity; they look more like a fast hack to avoid solving induced problems. Nothing being perfect, availability seems more relevant than perfection. At least the application can be used.
But the final cost grows larger than the initial one that was avoided for the sake of fast solutions (a bit like the old way of coding directly and debugging later instead of planning). OOP could have solved the problem… that was the expectation… but it didn't happen. Except, maybe, on the Amiga and Macs (and BeOS).
Looking back, we still live in a disguised DOS methodology, but without learning its lessons.
There were lessons: DOS, being a single-user OS, pushed solutions much different from UNIX's corporate needs for a few shared applications. That is, maybe, why we now have all applications in one place and configuration files all mixed together in another. It makes sense for the system but not for individual applications. Lessons seem NOT to have been learned.
And DOS comes to mind, even with all its defects and primitivism. Solutions are re-discovered again and again… almost as hacks for the problems induced by ignoring the old lessons. I suppose.
First of all, in fairness, DOS has been around forever, initially just meant as a quick CP/M-ish clone from scratch for the 8086, since CP/M-86 hadn't appeared yet. The main reason for its success was that it was there first, light and fast, easy to translate CP/M apps to, and of course cheaper than UCSD Pascal or CP/M-86. (I'm paraphrasing; see the link at the bottom, which explains way more than I ever could.)
The problems with memory management only became apparent much later. It's the legacy (closed-source) software that couldn't be fixed that was the big problem. But those were the days when you had to fight hard to get things to work (instead of just recompiling, like nowadays), e.g. DPMI, VCPI, XMS, raw, conventional, HMA, UMB, etc. Multi-boot configs have been prevalent since MS-DOS 6 (and before that, introduced with DR-DOS), so that's a non-issue.
And just to be pedantic, there is nothing good about any OS, its only value is in the software, which depends solely on what people do with it. There have been many great DOS apps (IMHO), some still developed. :-)) If you prefer Minix or ELKS or old old Linux on your old clunkers, that’s fine too, but that’s a personal decision. 😛
(Tim Paterson [not me] explains it all):
http://dosmandrivel.blogspot.com/
Slightly OT, but how well does MAME work on that hardware?
Very well, if you use the command line… and with games limited to the classics (from 1981 to 1986)!!!
Like Pac-Man, Lady Bug, Roller and Pitfall 2 (forgetting a few).
Using 4DOS aliases is always a big help, if you keep things simple and straight. There is an impulse to make things more complex and lose control – just as happens in OOP – even though it is procedural.
I believe that is a tendency of the implementor, not a fault of the tool. You change the tool to enforce rules… but always create new problems of unmanaged complexity. That's an implementor's fault.
We favor immediate problem solving and leave design as secondary. Naturally the opposite also happens with designers.
If you look on youtube, there are tons of videos of Win 7 running on P2 and P3 systems that were posted months ago…
Yeah, I had the last RC running on a dual P3-600 with 1 GB of PC100 months ago. Actually ran OK.
Yes, you can run almost any OS on these old machines…
But what are you going to do next? Run calc.exe and be happy?
You can't use office suites and other productivity tools on these machines with an OS from this decade anyway. Only a stripped-down, small Linux could do that, because you have the sources.
I compiled an entire Gentoo system on a 100 MHz Pentium with 64 MB of RAM a few years back. It took damn near a week to install everything, but it worked.
lol, painful. Yeah, stuff like this isn't that impressive. As long as the software will run on the instruction set the CPU supports… I don't see the big deal. I've run XP fine on a Pentium 75 (no MMX), I run Debian all day long on a P75 with 40 MB of RAM, etc. etc.
It is fun to see how /well/ you can make a new OS run on old hardware, tho
I've done a netinstall over a 14.4 modem.
Now that just sounds painful
I did a netinstall of FreeBSD 3.1 on 33.6 kbps – it's not as bad as it sounds because of how granular the package selection is (was?), and because, once the FreeBSD base is complete – only about 10 MB, IIRC – you can switch out to a login terminal and have a very functional command-line OS while the rest of the download/install runs in the background.
But… on 56k (minus phone-line error rates). True, it's not as bad as it sounds:
Install a minimal system to get a clean reboot, then install selected packages and the resulting dependencies in small chunks until your system has grown out to what you want. Even now I run a minimal install from the netinst CD or the full install DVD, then urpmi or aptitude install in chunks rather than a full dump in a single go. This habit developed quickly after a few dropped carriers or damaged packages broke the install.
(Running netinst over highspeed now, I don’t think I could go back to POTS line speeds and definitely not voluntarily.)
Floppy distros used to really impress me when I first discovered them. How fully functional a build can someone fit into 1.44 MB of storage? Damn Small Linux-type stuff providing a fully graphical build with minimal space and resource needs, or a BusyBox-type install, is probably the evolution of that.
The curiosity and challenge is pretty much the same though the goal is size of stored image rather than age of bare metal host.
I'd just like to see more people doing how-to docs once they manage to strip it down that far. Digging through a month-long forum thread for that same information can suck when you're trying to replicate the experiment. The Black Viper website is invaluable for the related information it provides.
Oh man, I used to use floppy distros all the time. Like tomsrtbt (http://www.toms.net/rb/), 2-Disk X (http://freshmeat.net/projects/natld/), etc.
Did you ever use the QNX4 boot floppy? I ran it for a while on a 386SX 25 MHz machine with 12 MB RAM, an NE2000 10 Mbit NIC, and an SVGA card. It's QNX with a full GUI, web browser, etc. Quite impressive. You can get it here (the originals and customized ones) -> http://qnx.projektas.lt/qnxdemo/qnx_demo_disk.htm
Give it a try, it is really amazing what they crammed on a floppy.
At that point, floppy distros border on art rather than engineering to a simple spec. I'd remembered them as an option, but since live CDs and USB flash drives I'd somewhat forgotten the specifics. I can't remember now what floppy image I did try back in the day.
This discussion did remind me of Menuet, which has its own VM in my collection now, and I'll be taking a tour around the still-surviving floppy distros, starting with your two links. They'll go into the OS collection also, and you never know when one may come in handy.
Don’t forget to try OctaOS, it’s very good too.
Too old, I’d prefer BlueFlops Linux (two disks) if I really needed something like that.
What do you mean by everything? I remember doing a stage 1 install on an Athlon XP 1900 that took a good week for everything (up to and including OpenOffice, KDE, GNOME, XFree86). A week seems awfully fast, actually. KDE alone took over 24 hours.
I meant everything I wanted on that machine, which meant no X. I’m not that masochistic! It was pretty much a base install only. I think I installed dancer IRCD too before I found a more powerful PII to waste my time with. The PII didn’t fare much better but it worked well enough for IRC.
I built Gentoo on an Amiga, that took an unbelievably long time…
It can take quite some time to compile on embedded ARM boxes too…
The beauty of Gentoo is that it will run on almost anything… distcc and ccache are your friends too. I would set up a cross-compiler for the Amiga if I had to compile on it regularly.
So I guess all of us *nix fans are hardware fans as well, while the Windows monkeys don't care and just waste their money on the latest and greatest.
I remember the 3-hour Linux kernel compile time on a P100 MMX, lawlz, so you read all the options twice before you started. Anyway, those boxes were still good for home routers, and after 13 years, yeah, I could still use them to NAT 100 Mbit.
What's the point of installing winsucks on old boxes anyway, when I only use that shit for gaming? I'm surprised that Win7 even allows itself to be installed on such a machine, when old shits like Win95/98/XP say that your hw is obsolete and won't install.
So the difference is that you can still use an old box with *nix for a useful purpose, but with winsucks you keep watching the f–king hourglass for 10 minutes whatever you do.
My server is an old 350 MHz G4 w/ 512 MB RAM, and it runs Leopard Server just fine. It runs MySQL, file sharing, BitTorrent downloads, and an iTunes server. I had to trick 10.5 into installing, but it's been running now for about 2 years, with the only reboots being for OS updates.
Other than Vista, most of their OSes aren't really /that/ hard on resources. Or they can be made to use less, at least.
Speaking of OS X, I was pleasantly surprised by how well 10.5 ran straight after a fresh install on my old G4-933 with 768 MB RAM. I know a few people running it fine on G3s as well.
You make it sound like Windows 7 is a completely new OS. It’s basically Vista with some minor speed enhancements.
Vista was fixed by SP1 but I think everyone was having too much fun bashing it to notice.
XP vs Vista on an EEE PC
http://www.youtube.com/watch?v=EXw7v1bxpSs
You could say the same thing about 98 vs 95 and XP vs 2k.
And Vista is still fairly hoggish even with sp2. It wasn’t “fixed”. It was improved. Windows 7 does vastly improve upon it.
I said "fixed" for a reason: the early releases of Vista had problems that needed to be fixed, not merely improved, since there were issues that were clearly problematic and needed a drastic change. OpenGL and file-copy performance are two areas that come to mind. However, those issues were fixed by SP1, not SP2.
You can use descriptors like “vastly improved” but I follow benchmarks and so far I have seen Windows 7 to be mostly an improvement in areas like startup and shutdown times and a slight improvement in battery life.
I hate to say it, but Win7 has some of the same unfixed problems as Vista. Unfortunately, nobody cares.
Seriously. It just is. That hardware is as good as useless unless an old OS (such as an old version of Windows it likely originally came with) or a specialized or command line OS is used. And even then, its use is severely limited by things such as the higher disk space demands today and slower components inside. I would imagine that a feat like this would be completely unrewarding, considering you could do virtually *nothing* with the end result, but apparently not to some people…
Honestly, who really even cares enough to be impressed by such a pointless accomplishment?
We’ve all learned an important lesson which is that new operating systems run slow on old hardware.
Tomorrow I’m going to connect to the internet with a simulated 14.4 kbps connection and blog about how much it sucks.
I do that every day; I'm stuck with Comcast! They sell me 3 MB service, then after they hook it up, whoops, I only have 1 MB service, which is routinely down around modem speeds. But hey, it's Comcastic!
It’s interesting to some people here that Windows 7 *can* run on such an old machine. Given it’s been called a leaner OS than Vista, it’s interesting to get a feel for how lean. It’s not revolutionary, it’s just a bit of geeky news – but this is a geek news site.
I hear some folks climb to the tops of mountains even though there’s nothing of value up there!
You know how beta testers are told to push the hardware/software to the limit and do what most other people won't even imagine doing? You discover (like me) all sorts of features and results that had hitherto been undocumented.
I, for my part, have an FIC PA-2002 motherboard (Socket 7) with a Kingston TurboChip K6-II 400 MHz accelerator, which will soon get an Evergreen AcceleraPCI PCI-card accelerator (Cyrix 600 MHz) that will bump the RAM up from 128 MB to 256 MB, as well as an ATI Radeon 9250 PCI on its way from an eBay purchase, on which I will test whatever newest OSes I can get my hands on.
Some people collect stamps and feel very rewarded and entertained by the process and the outcome; who are you to criticize a challenge just because you're not personally interested in it?
Give it more RAM and an AGP/PCI video card that'll handle compositing fine, and it probably wouldn't be too bad to use every day :p You'd be amazed at how snappy an old, slow machine can be if most of the UI rendering is offloaded to the GPU.
Yeah, until some tweakhead comes and disables GPU GUI and indexing and prefetching and enables arcane Win 95-style *tweaks* “cause it’s way faster”. I always LOL at those folks.
I’m sure that there are a good number of “tweakheads” that go to the “classic” UI in Windows not because it’s faster, but because by comparison it looks halfway decent (and much more familiar). Not everyone wants every window to be shiny and transparent with distracting glassy effects, sometimes they just want the GUI to stay the hell out of their way so that they can actually work faster and more efficiently.
Then again, IMO, since Aero Glass was introduced, the changes made to the UI even made the “classic” view hideous and unworkable… and I’m not too crazy about Aero either.
One will get some freed resources out of stopping the Theme service outright, but that's different from simply selecting the "classic" theme instead of the raw unthemed widgets. It may not double the responsiveness of the OS, but it means more RAM for what I'm doing, not for the OS it's running on top of.
lol, yeah. A lot of the “tweaks” either don’t actually do anything or are detrimental. :p
I do prefer the classic skin in XP, tho. I’m not overly fond of Aero either.
As with anything, if you don’t know what you’re doing or why then you’re likely to do stupid things. OTOH, of all the myths you could have chosen, you picked examples that are, frankly, wrong. Aero uses quite a bit of CPU as well as GPU and Windows will disable it to improve performance. Prefetching is similarly disabled on SSDs in Windows 7. And I don’t even know how you could think that indexing improves performance in Windows. If your argument is that it’s not a detectable detriment, then you may have a point, albeit a very difficult one to prove that goes against most people’s experience.
Why disable prefetching even on SSDs, really?
Yes, my Intel X25-E is ridiculously fast compared to a regular hard drive… but my RAM is ridiculously fast and makes the X25-E look like a snail.
Similarly, while I don't use Windows' search and its indexer, I still use another indexed searcher (locate32) – because even though the X25-E is fast, indexed searching is *way faster*.
I know that indexing doesn't improve the machine's performance.
But then, I believe indexing does improve MY performance. If you turn off every bit of cleverness that is built into current OSes, you deprive yourself of a great deal of clever functionality that is designed to make YOU faster. I'm not saying everyone needs it, but it's downright stupid to disable it without even knowing what its benefit could be.
What is more important – windows opening half a second faster, or me being able to find anything in half a second? For me, it's the latter. YMMV.
It comes down to usage patterns. There are people who are accustomed to using a search box to find things. But then again, there are plenty of people who keep their files organized, know by heart what lies in their documents/music/video directories, and therefore are used to accessing their files directly and never touch a search box. That's because the file index is in their heads, and they try to get the OS out of the way when it comes to automated file organization/indexing.
After seeing how it works on my old notebook, I’d certainly try it, if I had much slower crap around. Nothing wrong with trying to test limits!
I’d be surprised if Windows 7 works on anything older than a Pentium II or a Pentium Pro because I think some parts of the kernel may depend on instructions that were absent in P5 cores. It certainly hasn’t been tested on anything quite that old.
A lot of people probably weren’t aware that Win7 can run on this obsolete class of hardware. One of the reasons that it’s interesting is that there are regular claims about how Win7 is “bloated” or a “memory pig”, and this serves as a counterpoint to demonstrate not necessarily how well it runs on old hardware, but the fact that it CAN — and Microsoft deserves some credit. Anybody who thinks that Microsoft isn’t optimizing for smaller form factors and smaller processors is kidding himself. Netbooks are a huge growth area, and Microsoft is clearly aware of that trend.
If you already saw this somewhere else, great. Whatever. Pat yourself on the back, and go back to whatever anti-social geek behavior you were engaged in.
A friend told me a few years ago (2005, I think) that he installed Windows Longhorn on a P4 at 1.7 GHz with 256 MB of RAM, and it took 6 minutes to load, so I don't see why you should try to install Win7 on a Pentium 2. If it won't be usable, why waste your time?
Yes… because pushing boundaries to discover new limits in any field of interest is just stupid, right?
Indeed, how about Ubuntu?
We all know that Linux runs on the smallest gadgets around, but in my opinion mainstream desktop Linux has gotten so fat that a Pentium II is a wild dream.
There used to be a saying that “Linux runs on old computers”. It is a bad joke now, unless you want to follow some weird special-purpose distribution that is nothing like Windows 7 (which I haven’t even tried, but still).
Win7 was specifically configured if not outright modified to get it squeezed into such a small chunk of hardware. Why would you compare a specialty configuration against the default Ubuntu and assume you had any relevant basis for opinion?
For a relevant comparison, you might want to think about Damn Small Linux and the other low-resource distributions, or consider a custom selection of Ubuntu packages. I'd personally ignore Ubuntu and use Debian proper for such a project. With a nice light window manager, one should easily be able to match the performance of Win7 on the given hardware.
Really though, this isn't a pissing contest over whose daddy can squeeze into a smaller shoebox. It's about comparing a single OS against newer and older hardware.
I'm surprised it took this far down the thread for that particular mudball to finally be tossed, though.
There was a Microsoft-sponsored competition called “Surkein seiskarauta” (“Crappiest Windows 7 Hardware”) on a Finnish demo party / LAN party / computer festival called ASSEMBLY Summer 2009. The “winner” of the competition was a Pentium II at 133 MHz, with 80 megabytes of RAM and a 6.3 GB hard disk. The system was said to have been found in a friend’s cow shed.
For more information, see http://seiskamunkki.spaces.live.com/ (mixed English/Finnish) and http://www.itviikko.fi/ratkaisut/2009/08/13/windows-7–rauta-loytyi… (Finnish only).
Damn Finnish cows and their world domination plans.
If you have an old PC, you can convert it into a thin client and run Windows 7 by using software such as ThinServer:
http://www.aikotech.com/thinserver.htm
I do it the other way around myself: I've got an old PC sitting in the closet with NX server running on Mandriva, and I just connect to it and set it to download a new Linux DVD ISO, compile something, or whatnot, whenever I need to, and use the main PC for more pleasurable tasks in the meanwhile.
At my office we use a PII 200 MHz with 128 MB of RAM and XP Pro as a file server (mainly), plus MySQL, Apache and PHP. After a SCSI crash, I urgently needed to replace the HD with a new OS. That was the solution, about 6 years ago.
Zen said: “If it works, don’t change it”.
Obviously, we have to upgrade it, but it isn't really urgent.
Don't you mean a Pentium Pro 200? Or a PII 233 MHz?
The minimum speed a PII was released at was 233 MHz, and it ran as high as 450 MHz (then the PIII took over and went up to just over 1 GHz).
Sorry.
You are right. 233 MHz. The kind of processor cartridge that looks like a VHS tape inserted into the motherboard.
The fact is that it works pretty well, so we don't need to work on it often. Anyway, I admit that PHP/Apache runs very slowly.
As others have said, the hardware is not out of date or beyond usability.
Personally, I still use my PII/233 w/168 MB RAM for my home server (Gentoo Linux). Until about 2 years ago, I was using a P90 w/48 MB RAM (Slackware); both provide the same services: Email, DNS, DNS Cache, DHCP, Firewall, Samba/PDC, Apache Webservers, CVS (P90)/SVN (PII), and more.
Both could also easily serve an SMB of up to around 100 or so employees.
Considering how bad a state MS had gotten their code base into with XP and Longhorn 2003 (the one MS abandoned before the one that became Vista), it's quite an accomplishment – of course, so are MinWin and the Server Core series. If they did the job right, then there's no reason Win7 shouldn't run on a P90 as well.
Still, it probably won’t have the performance of a Linux system – and yes, I have run KDE3.5 on my PII (before it became a server), and it was decently snappy.
Trying to get Windows 7 on a Pentium 1? Well, good luck.
I don't think the Pentium 1 part will be the problem, but the BIOS more likely will be. Unless you have a 'recent' Pentium 1 mainboard, Windows Vista – and so I assume Windows 7 too – will refuse to install if your BIOS doesn't support ACPI well enough. As far as I know, this 'support detection' is based on the BIOS date. I have a Socket 7 ATX mainboard that supported ACPI under Windows 98, yet it refused to boot the Windows Vista installation CD with the message that ACPI support was missing.
Maybe some tampering with the files or the BIOS, changing that date, might trick it into passing that test. But I haven't bothered to try.
LinuxBIOS should help get around that, at least if you have a supported motherboard.
That said, most newer OSes for the most part ignore the BIOS after the initial kernel load and favor direct hardware interaction. ACPI is probably one of the few BIOS interfaces they all still use – mainly because of its role in the hibernate/suspend/resume process more than anything else.
That might work. But… LinuxBIOS? It got renamed to CoreBoot a long time ago. But CoreBoot with SeaBIOS could do the trick, I suppose.
True. I just never remember the new name – LinuxBIOS is what stuck in my memory.