In Linux distros, how do you know how much space to assign to each partition? And what if you do this and then later run out of room? Well, you could delete data or move it off to other partitions, but there is a much more powerful and flexible way. It's called Logical Volume Management. LVM is a way to dynamically create, delete, resize, and expand partitions on your computer. It's not just for servers, it's great for desktops too! How does it work? Instead of your partition information residing in your partition table, LVM writes its own metadata separately and keeps track of where partitions are, what devices are part of them, and how big they are.
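To make that concrete, here's a sketch of the usual LVM workflow. The device names (/dev/sdb1, /dev/sdc1) and the volume group/volume names are made up for illustration — substitute your own. The DRYRUN=echo prefix just prints each command so you can inspect it before running anything for real as root:

```shell
# Sketch of a typical LVM workflow. Device and volume names are
# hypothetical; run the real commands as root on your own devices.
DRYRUN=echo   # set DRYRUN= (empty) to actually execute

# 1. Mark the underlying block devices as LVM physical volumes
$DRYRUN pvcreate /dev/sdb1 /dev/sdc1

# 2. Pool them into a volume group
$DRYRUN vgcreate vg0 /dev/sdb1 /dev/sdc1

# 3. Carve out a logical volume (your "partition") and put a filesystem on it
$DRYRUN lvcreate --name home --size 20G vg0
$DRYRUN mkfs.ext3 /dev/vg0/home

# 4. Ran out of room later? Grow the volume, then grow the filesystem
$DRYRUN lvextend --size +10G /dev/vg0/home
$DRYRUN resize2fs /dev/vg0/home
```

The point of step 4 is the flexibility the post describes: because LVM tracks the mappings itself, the logical volume can grow onto any free space in the volume group, regardless of where it sits on the physical disks.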
If anyone is interested, I have turned this article into a PDF. You can get it by going into my profile, clicking on my web page link and then going to the Teaching section. Alternatively, I can e-mail it to you if you PM me.
what are the performance implications of using LVM? is an ext2 or xfs partition slower via LVM?
are there other volume managers for Linux? what are their performance characteristics?
ext3 and xfs are a bit slower on LVM, because LVM doesn't support barriers. Also, there are cases when xfs stacked atop LVM stacked atop other drivers can overflow x86's 4kB kernel stack. It's also possible to make a mess of the mappings, so the logical block numbers of the virtual device are randomly scattered across the physical devices. That takes quite a bit of effort to achieve, though.
What we call LVM these days is actually LVM2, built on top of Linux 2.6’s device-mapper (LVM1 is Linux 2.4’s volume manager). EVMS used to be a nice front-end to LVM, but is kinda unmaintained nowadays. mdadm handles RAID, which you might think would be part of volume management, but dm only supports RAID-1 for now.
Going proprietary, VxVM (Veritas Volume Manager) is available for Linux too. I have no idea how it compares.
The way I understand it, the fact that LVM doesn’t honour write barriers doesn’t have any bearing on performance. However, it does mean that journalling filesystems aren’t 100% protected from corruption after power cuts.
My recollection was wrong — it seems that when barriers are turned off, Linux filesystems act unsafely instead of simulating barriers with flushes and waits. So the lack of barriers actually improves performance.
Turning barriers off at the filesystem level will certainly improve performance. However, if barriers are enabled for a filesystem on an LVM device, you’ll get the overhead of using barriers in the filesystem code (I believe), but those barriers won’t be honoured by LVM.
If this is the case, when using LVM it makes sense to disable barriers on the filesystems in question, since they're not honoured anyway and will only decrease performance.
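For example, a hypothetical /etc/fstab entry for an ext3 filesystem on a logical volume (the device name is made up; ext3/ext4 take `barrier=0`, while XFS uses `nobarrier` instead):

```
# /etc/fstab — volume name is illustrative only
/dev/vg0/home   /home   ext3   defaults,barrier=0   0  2
```

The same can be done on a mounted filesystem with `mount -o remount,barrier=0 /home`.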
Mandriva’s diskdrake supports both LVM and RAID.
OpenSuse’s YaST supports LVM and, I suppose, RAID as well.
When you choose a distro, you should look for more than just browser plugins and a nice theme…
Every enterprise distribution supports LVM on top of mdadm RAID these days — Debian’s installer partitioner supports it, as does Fedora’s and RHEL’s.
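As a rough sketch of what those installers set up under the hood — a RAID-1 mirror built with mdadm, with LVM layered on top. Device names are hypothetical again, and DRYRUN=echo just prints the commands instead of executing them:

```shell
DRYRUN=echo   # set DRYRUN= (empty) and run as root to execute for real

# 1. Build a RAID-1 mirror from two partitions with mdadm
$DRYRUN mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# 2. Treat the mirror as a single LVM physical volume
$DRYRUN pvcreate /dev/md0
$DRYRUN vgcreate vg0 /dev/md0

# 3. Carve logical volumes out of the mirrored pool as usual
$DRYRUN lvcreate --name root --size 30G vg0
```

This layering is why mdadm and LVM complement each other: mdadm provides the redundancy that device-mapper mostly lacks, while LVM provides the flexible partitioning on top.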