From BSDForums: Marko Zec has put together patches against the FreeBSD 4.7 kernel sources which provide the ability to maintain multiple independent network stack images within a single operating system kernel. No userland patches are necessary, apart from an additional virtual image management utility.
The server won't resolve for me. Can someone explain what the point of virtualizing network stacks is?
Functionality
Within a patched kernel, every process and network interface belongs to a unique virtual image. Each virtual image provides its own, entirely independent:
* set of network interfaces and userland processes;
* interface addresses and routing tables;
* TCP, UDP, raw protocol control blocks (PCBs);
* network traffic counters / statistics;
* set of net.inet tunable sysctl variables (well, most of them actually);
* ipfw and dummynet instance;
* system load and CPU usage accounting and scheduling.
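To make the grouping above concrete, here is a rough C sketch of the idea (purely illustrative; the structure and field names are my own assumptions, not the actual data structures from the patch): everything the stack traditionally keeps in global variables gets bundled per virtual image, and each process points at exactly one image.

```c
/* Illustrative sketch only -- NOT the real vimage structures from the patch.
 * It just shows the idea: state the stack used to keep in globals is grouped
 * per virtual image, and every process belongs to exactly one image. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct iface   { char name[16]; unsigned ipv4; };   /* stand-in for struct ifnet */
struct rtentry { unsigned dst, mask, gw; };          /* stand-in for a route      */
struct pcb     { unsigned laddr, faddr; unsigned short lport, fport; };

struct vimage {
    char            name[32];
    struct iface   *ifaces;   size_t nifaces;   /* its own interfaces        */
    struct rtentry *routes;   size_t nroutes;   /* its own routing table     */
    struct pcb     *tcp_pcbs; size_t ntcp;      /* its own TCP PCBs          */
    struct pcb     *udp_pcbs; size_t nudp;      /* its own UDP PCBs          */
    unsigned long   ipackets, opackets;         /* its own traffic counters  */
    int             ip_forwarding;              /* its own net.inet sysctls  */
};

/* Each process would carry a pointer to its image; stack code would then
 * consult that image instead of a single set of global tables. */
struct proc { struct vimage *vimage; };

int main(void)
{
    struct vimage *master = calloc(1, sizeof *master);
    struct vimage *guest  = calloc(1, sizeof *guest);
    strcpy(master->name, "master");
    strcpy(guest->name, "vhost1");

    struct proc p = { .vimage = guest };
    printf("process runs in image %s\n", p.vimage->name);

    free(master);
    free(guest);
    return 0;
}
```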
From the userland perspective, all the virtualization modifications within the kernel have been designed to preserve complete API/ABI compatibility, so all existing userland binaries should be able to run unmodified on the virtualized kernel. Furthermore, as there are no address translation hacks, library replacements/hooks, etc., the overall performance penalty of introducing the virtualization layer is mostly negligible.
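For instance, a completely ordinary BSD-socket client like the one below (plain C; nothing in it is specific to the patch, and the address and port are just placeholders) is exactly the kind of binary that should keep working unmodified inside a virtual image, since it touches only the standard socket API:

```c
/* An ordinary TCP client -- no vimage-specific calls anywhere.  Binaries
 * like this are what the patch aims to run unmodified inside an image. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    /* 127.0.0.1:80 is just a placeholder destination. */
    if (getaddrinfo("127.0.0.1", "80", &hints, &res) != 0)
        return 1;

    int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        freeaddrinfo(res);
        return 1;
    }

    const char req[] = "GET / HTTP/1.0\r\n\r\n";
    write(s, req, sizeof req - 1);

    char buf[512];
    ssize_t n = read(s, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(s);
    freeaddrinfo(res);
    return 0;
}
```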
Within the kernel, API compatibility is preserved at the device driver layer; however, most modules will require recompilation, and some will need source code modifications, because of API changes in the higher-level networking routines and data structures.
Additional goodies contained in the above patch include:
* “ve” virtual ethernet clonable interfaces, which can be created on demand, assigned to a target virtual image, and then bridged either internally or externally through a real physical ethernet interface, to provide convenient access to the outside network from within the virtual images. This feature will be most useful in virtual hosting applications.
* “vipa” virtual internal IP address interface: a loopback-type interface which enables transparent binding of all outgoing TCP/UDP sessions to the IP address configured on this internal interface (see the C sketch after this list). This can be very useful for enhancing the robustness of sessions originating from, or connecting to, a system with more than one physical network interface, should the availability of one of the real interfaces change. The idea is borrowed from IBM’s OS/390 V2R8 TCP/IP stack implementation.
* hiding of “foreign” filesystem mounts within chrooted virtual images.
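To get a feel for what the “vipa” interface buys you, compare it with what an application otherwise has to do by hand. The C fragment below is only an illustration (the addresses 10.0.0.1 and 192.0.2.10 are assumed placeholders, not anything the patch mandates): it explicitly binds an outgoing TCP connection to a chosen local address before connecting, which is the step vipa performs transparently for every outgoing session.

```c
/* Without vipa, pinning an outgoing session to a stable local address means
 * an explicit bind() before connect().  vipa does this transparently for all
 * outgoing TCP/UDP sessions, using the address configured on the interface.
 * 10.0.0.1 and 192.0.2.10 are illustrative addresses only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    /* The stable "internal" address we want every outgoing session to use. */
    struct sockaddr_in src;
    memset(&src, 0, sizeof src);
    src.sin_family = AF_INET;
    src.sin_port   = 0;                       /* any local port */
    inet_pton(AF_INET, "10.0.0.1", &src.sin_addr);

    if (bind(s, (struct sockaddr *)&src, sizeof src) < 0) {
        perror("bind");                       /* fails unless 10.0.0.1 is local */
        close(s);
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(80);
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);

    if (connect(s, (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("connect");

    close(s);
    return 0;
}
```

With vipa, the application skips the explicit bind() and the stack uses the internal address as the source automatically, which is what makes sessions more robust against changes on the physical interfaces.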
What does this mean for the hobbyist programmer?
What does this do for server sysadmins?
It seems nice, but what could we DO with it?
-JM
The same things you do with virtualized OSes (without the overhead of the rest of the system)?
Run a production server and a testing server on the same machine, on different IPs.
Have two versions of something bound to two IPs, one using dummynet (to cap its bandwidth) and the other unrestricted, each also with different IPFW rules?
There are a plethora of uses which may or may not be immediately apparent.
With independent CPU usage accounting and scheduling, could this also be used for heightened system security, i.e. running different groups in different stack images?
Just a (naive) thought,
-wtg
I believe that tgilleland is right: if there is not a lot of overhead in running the virtualized images, then why not use it as a restrictive tool for given processes/groups/users?
=)
I am excited about the virtualized network interfaces. I could see using them for running apps that I am not too sure about, e.g. ones I don't really want running on my main OS, as they may not be too secure.
There are so many things this can be used for; sounds great to me!
It could be used as a way of prioritizing memory and CPU resources. There are many times I'd like to give priority to certain clients using the stack.
Well, now I’m intrigued. I just needed a little help.
By the way, can’t those IBM z-Series servers do something similar with those virtualized Linux images? Hmmm…
–JM
Virtualized Linux images are actually separate virtual machines: everything is virtualized, not just the network stack.
Of course, it's not only a hardware function but also an OS one.
OS/390 is a real monster: of the top ten supercomputers in the world, 7 are mainframes.
And don't forget, z-Series systems are multi-CPU boxes with hot-pluggable CPU modules (try to beat that!). Besides, I/O on mainframes is done through so-called channels; it's a whole different architecture.
Running multiple virtual machines at the same time (like VMware does) on a single-CPU system with an IDE hard disk makes very little sense; the only practical use is QA testing.
Things get more interesting when you step into the world of water-cooled computer systems, where you can literally step inside the computer.
That's why it makes sense to run Linux inside an OS/390 VM: unbeatable I/O, and the Apache web server smokes.