There are so many ways to boot alternative OSes on common PCs these days. What's your preferred way? Read on for a quick introduction to the most common methods, then let us know how you prefer to boot alternative OSes on your PC by taking the poll.

There are many ways to run multiple operating systems on a single piece of hardware. Below we weigh the pros and cons of the three most popular methods: re-partitioning, virtualization and emulation.
1. Partitioning
With the re-partitioning method, the user manually re-partitions the hard drive, installs the OSes one after the other on the appropriate partitions, and then uses a boot manager to choose between them at boot time.
Pros:
– OSes run full speed
– Full access to the hardware
– Partition resizing is possible (depending on the file system used)
Cons:
– Manually partitioning can be tricky for newbies
– Frustrating if you ‘lose’ your boot manager after an OS update
– Requires a full reboot to run another OS
2. Virtualization
Virtualization is the new kid on the block, and it's gaining ground very fast. To an ordinary user it looks a lot like straight emulation, but in reality it isn't: the virtualizer “shares” more hardware resources with the host OS than an emulator does.
Pros:
– Much faster than emulation (though slower than the re-partitioning method)
– Support for all host hardware, including 3D support
– Virtual clustering made-easy
Cons:
– Requires enough RAM
– Only runs on the same architecture as the host OS
3. Emulation
Emulators completely emulate the target CPU and hardware (e.g. sound cards, graphics cards, etc.). Emulators are the “old way” of running multiple OSes on a single computer. On PCs these days, emulation is mainly good for non-PC targets (e.g. game consoles, embedded systems) or for specific OS/CPU development purposes.
Pros:
– Best solution for embedded/OS development
– Doesn’t interfere with the underlying host OS
– Can be ported to any architecture
Cons:
– Can be very slow
– No 3D or other exotic PC hardware support
– Requires enough RAM
Being a traditional geek chick myself, I still prefer the manual re-partitioning method for my PCs (I like the clean nature of it), but for a MacTel I would much prefer Boot Camp's special partitioning scheme (if Vista and Linux are supported properly; otherwise, virtualization is my next best option on MacTels). Tell us what your preferred method is below!
Note: Javascript is required to both view and vote for the poll.
I use Linux and Windows on my laptop and on my main PC;
for me, boot time is no issue, at least on my laptop.
Laptop (1.6 GHz Turion):
[Cold] boot time to the Windows (XP SP2 Home) welcome screen – 30 sec
[Cold] boot time to the GDM login (Gentoo) – 26 sec
On my main PC [a 6-year-old AMD 800]:
To the welcome screen (XP Pro SP2) – ~1 min 5 sec
To GDM (also Gentoo) – 44 sec
I'd prefer coLinux on my main PC, but networking is impossible to set up, and it often disconnects for no good reason.
The XP laptop I use for work usually kicks my personal Linux box's ass at booting to a usable desktop. My wife's XP box beats my Linux box to the desktop, but isn't usable for a good 40-50 secs after the desktop shows up, while it loads every piece-of-shit, waste-of-space bit of taskbar-ware in the world. So it's hardly usable until long after my box is up and going. All three are pretty equal in hardware specs. Just wanted to point out that there are a lot of configuration variables you are overlooking.
Oh, and for the poll: I boot Gentoo (single OS) using LILO. I've got an extra hard drive, but no time to test other OSes out on it.
About XP – yes, it loads my antivirus and some BT applications… and that's it, so it's usable 8-12 sec after I hit Enter, with all themes and other useless stuff disabled…
About Gentoo – it uses initng with a minimum of services enabled, and my GNOME desktop is usable 6-10 sec after I hit Enter at GDM.
So if you like, add those seconds to my boot times;
as I said, boot time is no issue for me.
Nothing wrong with good ol' partitioning of the hard drive. I partition my drive with a FAT partition, two large data partitions and then about six ext2 partitions before I install any OS, and then I never think about it again. As for the bootloader, most of the time GRUB will pick up everything and set it up correctly, but I also keep grub4windows on my Windows install and call it from boot.ini, just in case. So, no worries and no surprises ever since I started doing it this way about four years ago, across countless distros and versions…
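For reference, the boot.ini hook is just one extra line that chainloads the GRUB loader file. A minimal sketch, assuming a grub4dos-style loader (grldr copied to C:\ – the exact file name depends on which package you use):

[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
c:\grldr="GRUB"

Picking the GRUB entry from the Windows boot menu then drops you into the usual GRUB menu, so Windows stays in charge of the MBR.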
Sounds like a good setup you have there. Nice.
What is grub4windows? I’ve never heard of that.
Not as much 'preference' as it is 'easier'. For some reason I find it easier to partition my HD than to get a virtual copy of my target OS + software running. In most cases it's also the cheapest way, considering a lot of free emulators/virtualisers aren't feature-complete or bug-free, or otherwise run dog slow (think: Bochs).
VMWare Server 6 is both excellent and free.
To run any application I want from any major OS within Mac OS X itself.
No need to boot anything else.
If I would need to run Windows or Linux for some coding reason, then partition scheme.
…is actually my preferred method, followed by multiple partitions on a single drive.
Yep, I agree with that. I swap hard drives on my Inspiron 1100 laptop: Fedora and Windows. Separate partitions on my desktops (Xandros and Win 2000 on one, Knoppmyth and WinXP on the other).
Mostly because of portability…
Maybe it’s a matter of taste, but I don’t really see a clear distinction between paravirtualization and what’s called ‘kernel emulation, e.g. coLinux’ in the poll. Could anybody explain the difference?
paravirtualization := modify the OS so that certain operations are actually hypercalls. Example: modifying the page table is a hypercall that traps into the hypervisor, which modifies the page tables on your behalf. With paravirtualization, you are actually running a real kernel.
kernel emulation := trap system calls and route them to another program which emulates the system call. Often happens entirely in userspace, with no kernel involvement.
WINE, for example, performs kernel emulation: Windows system calls are trapped and emulated. You never actually run a Windows kernel.
“kernel emulation := trap system calls and route them to another program which emulates the system call. Often happens entirely in userspace, with no kernel involvement.”
coLinux is not a good example then – it needs patched kernels, on both host and guest system (well – actually a device driver on the host, as it’s running on Windows).
I use Parallels (similar to VMware) because I often need both OSes running. I am a Windows developer, but my main desktop is Linux. I prefer the stability and security of Linux as my base OS, and I fire up an XP image when I need to do .NET development. Also, I sometimes access MySQL and PostgreSQL databases on my Linux OS from the Windows image; otherwise I would have to install MySQL and PostgreSQL under Windows as well. However, when I want to play games, I have to boot into a real XP partition I keep around just for this purpose.
Just to add my thoughts: my favourite is definitely virtualization, mainly because of a few IMHO important points you forgot:
– ability to suspend & resume the virtual machine at any time
– ability to take system snapshots or similar
– ability to move the vm from one physical machine (say, your office station) to another one (e.g. your laptop)
… all of which apply to emulation as well.
There are two computers sitting under my desk: one loaded with Windows 2000/SuSE 10 and another with Windows 98/Fedora 5, connected through a (cheap Belkin) KVM switch.
Nothing works better than partitioning the hard disk for dual-boot systems (especially on slower CPUs like mine, 400 and 500 MHz Celerons) where virtualisation is a no-go.
I agree partitioning might be tricky for inexperienced users, but the latest distros are really good at it (automatic disk partitioning).
Lately I see a lot of articles talking about virtualisation and/or emulation, but I think it's just a sign of more profound turbulence in the operating system landscape. People today (or rather, well-informed computer users) are aware that other operating systems exist and are more open to new challenges than before. Personal computers are a commodity today and very affordable, so no wonder we see more and more users running two or three computers in their homes.
Some of them are simply reluctant to abandon their working but technologically obsolete OSes; others are just testing the waters with a second operating system (Mac, Linux, Unix).
Yeah! The article didn't give the option of multiple PCs and a KVM switch, but this is my preferred way to boot alternative OSes too.
The main problem with the KVM switch is that none are perfect. I have a mix of Analog and DVI video. And there’s one PC with a burned out PS/2 mouse connector, so I have to use a USB mouse on that one PC–that means two mice and I often grab the wrong one.
But by keeping everything separate you are less likely to get in a situation where screwing up one environment impacts another.
Going one step further, I like to run a VNC server (http://www.tightvnc.com/) on everything that can run one. This even obviates the need for the KVM.
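On the Unix side, the TightVNC package makes that a two-command affair. A rough sketch, assuming the stock vncserver/vncviewer tools; the display number and geometry are just examples:

vncserver :1 -geometry 1280x1024 -depth 24   # on the box you want to reach
vncviewer somehost:1                         # on the box at your desk

On the Windows boxes the server typically runs as a service or tray app instead, but the viewer side works the same way.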
In my case it’s a little more extreme — I have an 8-port KVM with seven boxes currently connected to it running a number of different OSes (OS/2 Warp 4, Windows 2000, Windows 95 OSR2, Windows XP Pro, DSL 2.0, BeOS R5, and Mandrake 8.2). Most of them are using a 250GB Buffalo LinkStation as a common file server.
There’s no need to reboot when you can just change the channel. 🙂
(title says it all)
I was thinking about this the other day and remembered the story about Apple planning to include flash memory in their machines to make boot-up really fast (see http://www.appleinsider.com/article.php?id=1442 ).
Now what if we combined that with a 'fast user switching' type of setup that, instead of running Windows in virtualization, just suspended OS X, did a soft reboot, booted Windows using the Robson technology, and then, when you quit Windows, suspended Windows and brought back OS X?
This would let you use full hardware acceleration, you wouldn't have to worry about a VM running things, and it would be very fast.
I would still prefer to use Parallels or VMware, but I think this is another way they might go about things.
That is a fabulous idea, and it could work even on computers without the Robson technology, simply by suspending to disk. Why didn’t anyone think of this before?
Don’t get me wrong, virtualization should still be in there as a provision for users that don’t want to leave OS X, but for gamers this would be their best bet.
Ideal situation: The Mac user is able to easily install Windows into its own partition, and this partition is accessible through virtualization or dual-booting, but either way, it’s loaded from a state of hibernation, not from scratch. The same goes for a reboot from Windows into the Mac OS–it should load it from a state of hibernation.
If the computer is new and contains the new “Robson” flash memory cache, it should use it, but if not, it should rely on hard disk space. Unfortunately, Mac OS X currently has no provisions for hibernation/suspend-to-disk, but that could change with the next big release…
Brilliant!
With most alt OSes running on pretty low-end hardware, and the cost of hard drives etc. so cheap, my preferred method is just to install each OS on a dedicated box.
Live CDs are also a really nice way of getting the job done.
But then again, I do also dual-boot (multiple versions of BeOS on my main box).
I used to use the closed-source boot manager System Commander. Great features, great compatibility. But the company, V Communications, was sold, and the product is no longer as well supported. It was a great product.
It bugs me that the OS runs slowly in VMware, but that is more than made up for (IMHO) because:
1) I don’t have to save my work, close all windows, and reboot
2) I don’t have to worry about messing up my boot manager or other partitions
3) I can have multiple OSes paused in VMware. Seriously, that is the coolest feature.
4) I don’t need to repartition and reshuffle things on my hard drive just to try out a new OS
5) I don’t need to burn CDs. Just read right from the ISO.
I could go on, but those are the big ones.
“Good old re-partitioning method with a boot manager”
I usually do that whenever I feel like trying Linux for the hundredth time. Or, these days, a live CD.
I already have an iBook I used about 10 times, with an AirPort card, that sits in a corner in its original box and that nobody wants to buy. Guess nobody wants Macs anymore or something; I offered it cheap.
“Guess nobody wants Macs anymore or something; I offered it cheap.”
So, what do you consider cheap? And for that matter, what generation is it?
As for the poll, I also do the hard drive swap thing. It’s just clean and easy, and almost as fast as dual-boot. The only downside is no common storage, unless you count my thumb drive.
As mentioned by a couple of others here – I actually use separate HDs entirely. I have various drives ranging from 500 MB to 60 GB lying around that I format and use as utility drives for testing alternative OSes.
The beauty here is that I can swap them from machine to machine (which I have various configurations for testing) without really removing the primary OS (dedicated HD) from those machines I test on.
The drawback is that I generally have the cases torn apart 95% of the time as I hate most of the removable drive bays available.
I also keep 4 of these machines on a KVM at any given time to quickly swap between them as I’m testing.
Well, I do a lot of the same things you do: open frame etc., slightly changing hardware on the fly.
On the small hard disks, I used to buy those as a way to get drives on the cheap back when new ones used to be much more expensive, but I also found that they are incredibly slooow, often down to 1MByte/sec transfers at full speed at the 500MB level. Now I just stick to newer drives with larger caches, although the 128GB crossover is yet another stupid hurdle for BeOS or my older mobos.
“On the small hard disks, I used to buy those as a way to get drives on the cheap back when new ones used to be much more expensive, but I also found that they are incredibly slooow”
Yes, I agree that the older drives are slow. But they’re mainly for testing out alternative OSes – not necessarily using them extensively.
For more serious work, I use the newer larger drives – like a spare 60gb with 4 partitions.
Personally, on my work laptop, I use Win2K with VMware for my RHEL 4 ES OS. Not to mention, my desktop PC at work is WinXP with VMware + RHEL 4 ES. Hopefully, when my new laptop HDD comes in, I'll have it the other way around.
It really kicks ass when I need to configure something for Linux. I just punch up VMware, do what I have to do, pull the configuration file, and put it in production. Done! No need to look for another server to fiddle around with, plus the reverse is true: if a package needs to be updated, I pull the configuration (or whatever), update it in my VMware station, test it, give it the green light (if it works) and put it back into production.
My personal server at home, Ubuntu.
My gaming rig, partitioned 3 ways: Vista / XP 64 / RHEL 4.0
VMware works with anything I've thrown at it. Virtual PC for Mac OS X works OK, but is AWFULLY slow. My $0.02.
I actually partition my disks. I think it's not too painful (or painful at all) once you get used to the system. It's fast and it keeps everything pretty spread out, which is kind of nice when you're dealing with something like OSes. I find it practical, if hard to justify unless you really need it (like you need to use Windows but want Linux, or vice versa). I'm kind of surprised how many people use virtualization, though; ~25%+ seems to be the result thus far in the poll.
For me as of now, though, I’ve only got one real OS running: GNU/Linux (Arch Linux, in particular) which I’m happy with.
>I actually partition my disks. I think it’s not too painful (or painful at all)
Have you actually asked your hard drives about it?
Good point, good point. Well I think it’s happy, personally.
I make use of both partitioning and virtualization. My current system is a triboot Linux (Debian), FreeBSD 6, and Windows XP.
I prefer to work under FreeBSD, but much of my work requires Windows. When I need to work under Windows, I use VMware Server to boot FreeBSD and Linux from the partitions and run them in a nice, virtualized fashion. VMware also comes in handy when I’m testing my hobby OS, or running various other small OSes that I like to use sometimes.
I used to run them on separate disks.
Now I mostly use Qemu. My disks are full. lol
I use a combination.
* Emulators – especially for DOS applications (games, actually, run in DOSBox). Occasionally I use Virtual PC – mostly in order to run live CDs without rebooting.
* Real installations – used for Gentoo Linux, Syllable OS, Windows XP and SkyOS.
* Live CDs – used for various *nixes as well as eCS, AROS and a few other systems.
I do the partitioning method for the reasons listed:
Pros:
– OSes run full speed
– Full access to the hardware
Being the average computer geek without high-end hardware, I like to get as much speed as possible out of the system so I can see what the OS is capable of. That is my preferred method, but if I don't want to reformat a partition for a specialized OS, I'll download the live CD ISO and run it in QEMU, or see if they have a VMware image.
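For anyone who hasn't tried it, booting a live CD ISO in QEMU is a one-liner. A rough sketch, with the ISO name and memory size (-m, in MB) as placeholders; newer builds name the binary qemu-system-i386 or qemu-system-x86_64:

qemu -m 256 -cdrom somedistro-live.iso -boot d

The -boot d switch tells QEMU to boot from the CD-ROM device.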
I partition my 80 GB drive into five parts before installing any OS, then I use GRUB as the default bootloader.
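For what it's worth, the GRUB (legacy) menu.lst for that kind of layout only needs a stanza per OS. A minimal sketch with made-up device names; adjust the (hd0,x) entries and kernel paths to your own partitions:

title Linux
root (hd0,1)
kernel /boot/vmlinuz root=/dev/hda2 ro
initrd /boot/initrd.img

title Windows XP
rootnoverify (hd0,0)
makeactive
chainloader +1

Remember that GRUB counts partitions from zero, so (hd0,1) is the second partition on the first drive.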
I just run the OS I want to try out on another workstation.
What I’ve done is scoured ebay for bulk-buyers and sellers. These are sellers that buy lots of systems off-lease or from companies that are upgrading and getting rid of their old systems. Often they will have a lot of, say, 50 of the same machine. Sometimes these are refurbished, which can be a good deal since most refurbished machines have been looked at and tested (in order to refurbish them!)
Often you can get fantastic deals on these machines. I feel better about buying ebay boxes from these dealers since one assumes that, first of all, the machines are probably coming from a business environment, and secondly, because often they are certified to work (checked in a cursory way). Bigger bulk sellers will have short warranties (usually 30 days) so this further takes the risk out of it.
Buy a few of these for $100-$200, put whatever OS you want on, and then hook everything together to one monitor via a KVM switch and, as in my case, NFS and Samba.
I have:
(1) Windows machine I built myself a few years ago. This is for Windows apps I need to run like Adobe Premiere or Microsoft Streets & Trips
(2) TigerDirect kit I assembled which is my Gentoo Linux desktop – what I’m typing on now and my “main” machine.
(3) Router running Debian. This is a Compaq Deskpro I bought from ebay for $100.00 from a bulk seller. It still has its Enron inventory sticker on it (and is, by extension, a nice conversation piece).
(4) File server / dev server. This is a Compaq Deskpro EN which I bought for $125.00 from ebay. This has 4 hard drives in it and, like the router, has been completely reliable.
(5) A Dell compact desktop unit which I have FreeBSD installed on. This was $150.00, again from a bulk sale.
(6) An old IBM Thinkpad, I bought for $225.00 a few years back. This runs Debian Linux now but has also run Gentoo in the past.
If you add up the numbers here, that’s not bad for a full spectrum of machines. None of them have been DOA or had any problem because, again, I bought from a bulk dealer (many of whom had thousands of comments/ratings from previous customers) rather than individuals.
I find this solution to be ultimately more satisfying than re-partitioning (Rebooting is a pain because every one of my systems serves *something* which becomes unavailable if I shut down the OS). Occasionally I will run a VNC client to work with the Windows machines, and usually just use ssh to work on the other machines, but can also flip around on the cheapy Linksys KVM switch if I want.
I multi-boot the alternative operating system (Windows) on my laptop.
I have my drives partitioned and use Acronis to manage them. It works well, handling everything from OS/2 to SkyOS.
to have partitions and run the OS at full hardware speed… Virtual machines are slow…
I also go with multiple PCs and a good 4-way KVM; even if it ain't very clever, it really can't fail. Reason: most OSes have one or more issues anyway when you try to get one system to be everything to all OSes, and it lets me use older systems that still work fine. They are not all on at the same time anyway.
I will be following the Apple story for virtualization though to see how well 1 machine can serve many OSes.
I use repartitioning for most of my multi-OS needs. I have a USB key that I use to share data between the OSes if I need to. I use eComStation as my main OS, but if I need something quick, I use Virtual PC. I have several OSes installed in it, but partitioning is still my favorite.
Before long, with some of my newer machines being able to boot from USB, I may have a few RocketPods stacked up and install to those, so that they have no chance of really messing up any of the others should something happen – which Windows is good at doing.
“I use eComStation as my main OS”
Wow! How does one end up with eComStation as their main OS? (Just curious, not to slight OS/2 or anything; just how did you get there?)
Alas, I do both. I have Windows and Linux partitions on my laptop, and dual-boot them, but I have also been known to crank up my native Windows partition in VMware on Linux. If virtualization were complete, a la IBM mainframes, I could do away with the Windows partition and run everything under emulation, but as good as VMware is, it still emulates many of the PC’s devices, so I have to native-boot Windows for some devices I have that only have Windows drivers.
I’m still trying to decide what to do with my home/server machine: build a honking box and virtualize several OS instances on it for different things, or do the eBay thing and get a cheap PC for each task.
1) Partitioning
2) bootable CDs (or floppies)
My solution is:
dual boot on my work notebook (Windoze XP / Ubuntu 5.10)
only Ubuntu 5.10 on my daughter's desktop
Windoze XP and a live CD Ubuntu 5.10 on my wife's Sony Vaio
all data on a FAT32-formatted external USB 2.0 80 GB hard disk
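A shared FAT32 drive like that mounts the same way everywhere. On the Linux side, a sketch of an /etc/fstab line, assuming the drive shows up as /dev/sda1 and your user ID is 1000 (most desktop distros will auto-mount it anyway, so this is only needed if you want a fixed mount point):

/dev/sda1  /media/usbdisk  vfat  rw,user,uid=1000,umask=022  0  0

The uid/umask options matter because FAT has no Unix ownership information of its own.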
I’ve got 7 partitions on my single 80GB HD. One for Windows, one for FAT32 data (mostly games + videos). One for Linux, and one for data that I use in Linux, mostly the source code of my own OS, Cefarix. One other partition is the swap partition. Then there is another partition for Cefarix OS, and I keep another partition free for transferring raw data sometimes between my OS and Linux (since my OS can’t write into ext2/3 or FAT32 yet). This free partition might also be used for testing new OSs, although the last time I had another OS installed it was ReactOS, and that was installed into the secondary Windows FAT32 partition.
KVM and/or VNC work nicely to clear up desk clutter.
I don’t see an option for partitioning. I guess it’s option 6, cuz that one has no text next to it?
Virtualization is the most feature rich way. Dual booting is not just about boot time, but also about having to stop and close everything and do a shutdown, which I almost never do (I always suspend or hibernate).
“I have Windows and Linux partitions on my laptop, and dual-boot them, but I have also been known to crank up my native Windows partition in VMware on Linux.”
I've been dreaming of something like this, but never tried it, because my hardware wasn't that good. I also thought it was not possible to run the same Windows installation on two machines (the real one and the virtual one). Do you have a trick to make this possible? Or did you mean that you have two separate Windows installations, one virtual and one you dual-boot into?
I use the same Windows partition, with two different hardware profiles, for both. VMware calls this a “raw disk”. Not as handy for moving your Windows installation around, but I don’t have to keep 2 copies of Windows and accessories this way.
Ahm, what was the vote option above Boot Camp? Is it supposed to be none, or the “above”? I saw that it got quite a few votes…
I use LILO to boot Windows, Linux or LoseThos on one computer.
On my other computer I'm stuck, because I have just a recovery disk that places Windows on the computer along with a recovery partition. I fool it by placing LoseThos on the recovery partition, so I have the option of booting Windows or LoseThos.
OOC, I found http://www.justrighteous.org/ , but there doesn't seem to be much info there as to exactly what LoseThos is… Google comes up with VERY few links to sites even mentioning it, including one comment in a relatively old OSNews article… the TudOS one, I think. BTW: that many virtualized instances of Linux in the screenshot was just sick… }:) Sort of reminds me of the emulated OSes all running at one time on Macs that were in vogue a few years ago…
From what I can see, it would appear to be a barebones-ish OS with graphical support, booting directly into a sort of development shell? (Sort of like Squeak, maybe, except with a C/C++-ish syntax?)
I use the partitioning method and dual-boot using LILO. This method works for me as I don't have a big need to switch OSes often. And when I'm on Windows and want to do a quick job on Unix, there's always the PuTTY icon around…
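In case it's useful to anyone setting this up, a Windows/Linux dual-boot lilo.conf only takes a few lines. A rough sketch with placeholder device names; remember to re-run /sbin/lilo after every change, since LILO stores block locations rather than reading the filesystem at boot:

boot=/dev/hda
prompt
timeout=100
default=linux

image=/boot/vmlinuz
  label=linux
  root=/dev/hda2
  read-only

other=/dev/hda1
  label=windows

The timeout is in tenths of a second, so 100 gives you ten seconds at the prompt.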
Anyway, when I buy myself a new box I might give VMware a go…
…is my favorite. Simply because it is very basic (arrow up/down and spacebar), and it works for everything I’ve ever needed.
I partitioned two physical drives: one with many smaller partitions (2 to 9 GB each x 7 partitions) for operating systems, and a second 60 GB drive with one big FAT32 partition for all my audio, video, and data. Since almost every OS has FAT support, this has worked out very well. I'm running that machine with a 766 MHz Athlon CPU, so I haven't given virtualization much of a try as it only has 382 MB of RAM (I like old machines, with old OSes). I'm not so much a glutton for punishment as I am nostalgic for the days of clean, efficient platforms.
Running installed:
*Win98SE
*BeOS Max edition
*Fedora Core 5
*FreeBSD 6
…and I play with the following LiveCDs in my free (rare) time:
*FreeDos
*Plan9
*Bluebottle
*SkyOS
*ReactOS
(Bluebottle is quite interesting if you have the time to learn Oberon.)
I used to do the partition thing, then I did multiple hard drives. I ended up just hating having to close all my apps and shut down to switch to a different OS. Using cheap, old hardware and having an OS or two on each turned out to be the easiest thing for me.
Removable HD caddies. Just pop out your Windows drive and stick in your Red Hat or whatever drive. Thus there is no possibility of corruption, etc.
I have an amd64 machine where I'm dual-booting Ubuntu and FreeBSD, plus two laptops and two PCs acting as servers, and I've promised myself I will never put Windows on them. I'm a happy FreeBSD and OpenBSD user and will promote them in every way I can.
So the answer is:
My primary OS is an alternative OS :-)
hda1: Debian netinstall, GRUB
hda2: other distro (currently SMGL, but I’m thinking of trying ZenWalk)
hda3: Windows
hda5: shared/FAT32
I got tired of the bootloader being redone each time, some distros wouldn't install GRUB (Puppy was my first choice), and DSL crashed too much. So: Debian Stable, which is basically a rescue install and a big boot-loader config partition.
With hibernation, and using less bloated Linux distros, shutting down and booting back up is typically less than a two minute process.
In the future, I will likely go with separate PCs and a KVM (a Q-Bea for sure, if it ever comes out ).
I multi-boot – I used to run only one OS at a time (FreeBSD or a Linux distro), but after I got myself an extra hard drive, my setup is like this:
Slave harddrive:
10GB windows xp
25GB linux
5GB for testing other OSs and distros
Master harddrive:
1GB swap
74GB /home
I've messed up (or removed) my bootloader many times, but it's not really much of an issue; I just reboot into a live CD, mount a few filesystems, chroot and reinstall GRUB from there.
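For anyone who hasn't done it, the recovery dance goes roughly like this. A sketch assuming the Linux root filesystem is on /dev/hda2 and GRUB goes into the MBR of /dev/hda; substitute your own device names:

mount /dev/hda2 /mnt
mount --bind /dev /mnt/dev
mount -t proc none /mnt/proc
chroot /mnt /bin/bash
grub-install /dev/hda
exit   # leave the chroot, then reboot

Some live CDs need grub-install --recheck if the device map in /boot/grub is stale.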