The Solaris Containers technology addresses this void by making it possible to create a number of private execution environments within a single instance of the Solaris OS. This paper provides suggestions for designing system configurations using powerful tools associated with Solaris Containers, guidelines for selecting features most appropriate for the user’s needs, advice on troubleshooting, and a comprehensive consolidation planning example.
It could be useful. Still, despite the fact that you can specify the resource usage per container, a given system will still have a limited amount of *something*, whether it be CPU time, peripheral access… so I imagine the number of containers per system would be limited.
It seems that it is best for access control and for acting as a sort of “quota” system – not just for disks, but for every aspect of the hardware.
Now, VMS has had (for as long as I can remember) per-user resource allocation as part of its user management, as I imagine any time-sharing system does, along with strict access control if you care to use it. This sounds like it takes things a step further, with virtualization of the hardware and with restrictions that are “all on” by default and can be loosened by the administrator if necessary.
I also like that each container can have its own network addressing and can be rebooted individually, and that this container technology apparently has an API as well. That is way beyond the old technologies.
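For the curious, the per-zone networking and individual reboots are driven from the global zone with zonecfg and zoneadm. A rough sketch going by the docs – the zone name web1, the interface hme0 and the address are just made-up examples:
# zonecfg -z web1
zonecfg:web1> add net
zonecfg:web1:net> set physical=hme0
zonecfg:web1:net> set address=192.168.1.50/24
zonecfg:web1:net> end
zonecfg:web1> commit
zonecfg:web1> exit
# zoneadm -z web1 reboot    # reboots only this zone; the rest of the box keeps running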
“It could be useful. Still, despite the fact that you can specify the resource usage per container, a given system will still have a limited amount of *something*, whether it be CPU time, peripheral access… so I imagine the number of containers per system would be limited.”
On modern systems, the limit on the number of zones is extremely high. Each zone can use as little as 100MB of disk space (around 35MB if you use ZFS), the RAM footprint of a zone is less than 20MB since it uses a shared kernel, and the basic daemons use shared memory as well.
For example, http://blogs.sun.com/roller/page/jclingan has a Sun Ultra 5 with 190 zones installed. This is an 8+ year old machine: a single 300-440MHz CPU and a maximum of 1GB of RAM. Note that the zones really aren’t usable at that density, but the machine could easily cope with 50-75 active zones.
When I had access to a slightly larger Sun server, I installed 25 zones on it: http://uadmin.blogspot.com/2005/05/invasion-of-clingan-zone.html
The loadavg was just 0.06, so it could easily run 400+ zones. Yet this too is an old 4x 400MHz CPU box with 4GB of RAM. If you scale this up to a 4-way quad Opteron box with 32GB of RAM, you could easily have 2000 or more zones running on it. For a hosting company, you can basically give each customer their own zone.
To sum it up, Zones will give you as many zones on a system as you need. Usually raw CPU power, or fear of putting all your hundreds of eggs in one basket (system), will keep you from adding more zones.
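To give a feel for why the footprint is so small: a default (sparse-root) zone loopback-mounts /usr, /lib, /sbin and /platform read-only from the global zone, so the install copies very little. Creating one looks roughly like this – tiny1 and the zonepath are just example names:
# zonecfg -z tiny1
zonecfg:tiny1> create
zonecfg:tiny1> set zonepath=/export/zones/tiny1
zonecfg:tiny1> set autoboot=true
zonecfg:tiny1> commit
zonecfg:tiny1> exit
# zoneadm -z tiny1 install   # populates the zonepath; a sparse root stays small
# zoneadm -z tiny1 boot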
Not to poop on your parade, as I agree with *most* of what you said:
“When I had access to a slightly larger Sun server, I installed 25 zones on it: http://uadmin.blogspot.com/2005/05/invasion-of-clingan-zone.html
The loadavg was just 0.06, so it could easily run 400+ zones. Yet this too is an old 4x 400MHz CPU box with 4GB of RAM. If you scale this up to a 4-way quad Opteron box with 32GB of RAM, you could easily have 2000 or more zones running on it. For a hosting company, you can basically give each customer their own zone.”
That is, only if just a few percent of the zones had anything running. With a loadavg of 0.06, the 25 zones you had must have been doing nothing.
# zoneadm list -cv
  ID NAME      STATUS     PATH
   0 global    running    /
  11 zone4     running    /export/zones/zone4
  12 zone5     running    /export/zones/zone5
  13 zone6     running    /export/zones/zone6
  21 zone2     running    /export/zones/zone2
  23 zone1     running    /export/zones/zone1
  25 zone3     running    /export/zones/zone3
  49 zone7     running    /export/zones/zone7
   - zone8     installed  /export/zones/zone8
# uptime
 11:53am  up 27 day(s),  3:14,  1 user,  load average: 0.03, 0.02, 0.02
#
That’s a machine with the zones doing *zip*. The loadavg spiked to 0.6 when I tossed a decently popular website on Apache inside one of those zones. I can’t imagine 400 active zones… yuck…
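If you want to see what the zones themselves are doing, rather than just the box-wide loadavg, prstat has a per-zone summary; something like:
# prstat -Z 5    # the summary at the bottom shows CPU and memory usage per zone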
I think the point was that the virtualization provided by Zones does not cost much in itself. By this I mean that it’s reasonable to consolidate 4 application environments (e.g. web servers), each ~20% utilized, into 4 zones on a single server. Because the cost of the virtualization is low, the practical limit is the resources available on the machine and the demands of the applications you want to run. This contrasts with some other virtualization techniques, where the cost of virtualization is high and/or there are hard limits on the number of virtualized environments available.
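For that consolidation case, the usual approach (as I understand it) is to enable the fair share scheduler and give each zone CPU shares, so one busy web server can’t starve the other three. Roughly, with made-up zone names and share values:
# dispadmin -d FSS           # make the fair share scheduler the default (takes effect at next boot)
# zonecfg -z web1
zonecfg:web1> add rctl
zonecfg:web1:rctl> set name=zone.cpu-shares
zonecfg:web1:rctl> add value (priv=privileged,limit=25,action=none)
zonecfg:web1:rctl> end
zonecfg:web1> commit
zonecfg:web1> exit
# prctl -n zone.cpu-shares -r -v 25 -i zone web1   # or adjust the shares on a running zone
Repeat for the other zones with whatever share values reflect how much CPU each should get when the box is busy.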