Zones are one of the many useful new features in Solaris 10, both for administrators and developers. This article gives an example of using zones to replicate a multi-tiered environment for use in development.
Nice writeup Fintan, I hope that there are folks out here who find it interesting and/or useful.
Alan.
This is the best kind of marketing: hearing how to use the tools straight from the source. It ensures the sales critters don't balls up promoting technologies that would otherwise be great sales winners.
Sun's website says that full Linux interoperability is on its way for Solaris 10, so one Opteron server could even support multiple mixed Solaris/Linux tiers (say, if an app server needs Linux). This is nice stuff!
If Sun were to manage getting Windows 2000/2003 into a zone (as if it were a SunPCi environment), that would be really cool. SunPCi isn’t the fastest (Celeron/Mobile Athlon), but it is extremely useful for boxing Windows into a place it can do no real harm. Instead of getting some crusty Celeron, imagine getting a slice of an Opteron 252!
Solaris comes with a lot of neat features like Zones, NFSv4, and DTrace. I can't wait until OpenSolaris (http://opensolaris.org/) hits the web; definitely exciting times ahead for *NIX enthusiasts. The best part of OpenSolaris is that it's OSI approved, which means that anyone can fork it. It would be kind of neat to see all sorts of OpenSolaris-based distros, LiveCDs, and such.
There have already been comments about a PowerPC port of OpenSolaris and a Gentoo distribution around OpenSolaris. OpenSolaris really will be open source, just like Linux and the BSDs. The CDDL-GPL license trolls really don't have a lot of ammunition for their FUD, and it's already running thin. The main thing to consider is that the only software really affected by the licenses, anyway, will be the kernels. So what: all the user applications don't care.
"The CDDL-GPL license trolls really don't have a lot of ammunition for their FUD, and it's already running thin. The main thing to consider is that the only software really affected by the licenses, anyway, will be the kernels. So what: all the user applications don't care."
By far the most common open source license is the GPL, so not just the kernel space will be affected.
http://www.dwheeler.com/essays/gpl-compatible.html
> http://www.dwheeler.com/essays/gpl-compatible.html
The above URL has the title:
<em>Make Your Open Source Software GPL-Compatible. Or Else.</em>
The title of that article really does say a lot about the attitudes of the GPL zealots. If you don't do what we say, we're going to huff, and puff, and blow your house down. And if we can't, it's your fault anyway and we are going to tell our buddies on Slashdot about you.
The article you linked to mainly has evidence about numbers of projects, but most projects are not large or even active (comparing lines of code across the board would be more interesting). Also, the cited opinions from Groklaw don't really make a strong case one way or another, because that site is known to be biased and very cynical. Overall, the article is a good opinion piece.
I have to maintain that only the kernels are really affected. For example, on my install of Solaris 10 (not even CDDL), it has X.org, GNOME, OpenOffice.org (StarOffice), and dozens of other apps under various OSS licenses. There is even GCC as part of the core supported install. The apps run on top of the kernel, so licensing is not an issue. Further, I have /opt/sfw and /opt/csw directories chock full of OSS software under various licenses. My login shell is even bash.
As for “GPL compatibility”: The CDDL is a per-file license, meaning that there shouldn’t be a problem having GPLed files and CDDLed files in the same project–that is if the GPL allows it. From what I’ve read, it appears it is the GPL that is incompatible with the CDDL and not the CDDL that is incompatible with the GPL. The CDDL allows combining at the file level and licensing binaries under different licenses, and the GPL does not.
In fact, at the FSF site, they say that “compatibility” means the resulting combined program has to be released under the GPL! It is a one-way path, never leading away from the GPL. The only reason so many people see no fault in this is due to current fashion, just like when people saw no problem with polyester leisure suits so long ago.
People can and should choose the licensing scheme that suits them best. Often, that means not choosing the GPL. The BSD crowd maintains their need for absolute freedom, for example. Sun found they needed what the MPL offered, mainly concerning patents, it appears. Ultimately, whether or not something is “GPL compatible” only means if someone can cut and paste. Cleanroom implementations are always allowed, anyway.
In the article you linked to, there is a whole anecdote trying to pin the problems with XFree86 on the license changing to be "GPL incompatible". The consensus in real life was that people just didn't like XFree86's management and the slow progress of the project, so they used the license change as the "straw that broke the camel's back" in order to justify a fork. I really don't think people care all that much about the license, as long as it is sufficient for open source to function. This is the whole basis of the OSI (Open Source Initiative) program, which isn't always eye-to-eye with the FSF, I might add.
Who cares if Solaris is CDDL? It's the people who want to use Sun's code who have to deal with that.
Though I don't know why they didn't release their code under both the CDDL and the GPL to see which license was going to get more attention, and then decide which one to pursue.
It's too bad, though, the way Sun has been making its decisions. Being relatively open, concerned with the quality of technology, and not obsessed with patents locking people out would have made Sun a great GPL community member. But Sun's actions made it possible for IBM to step in and be what Sun could have been. It sucks because IBM is patent happy and I don't trust them for it.
Please stop the madness... this is a thread that is supposed to be about Zones in Solaris 10!?!
CDDL vs GPL vs who cares!?!?
What about Zones in Solaris? — I think this is a “killer” feature for *any* NIX OS. Does it work or not, please give me some REAL INPUT?!?!
I'm sick and tired of X against Y; let's hear about how Zones work!
I think Zones would work great in my little data center, how about yours?
Please move on my fellow friends…..
Amusingly, as the author of the blog posting that was linked back in January, on seeing it posted I was expecting some questions about the article. Then again, two of the best presentations I have seen in a long time were linked earlier this week on Solaris DTrace, and not a single DTrace question was asked (just the standard, and already rebutted, "ltt and kprobes do that" answer, which is not a question, more a lack of understanding). But people had to get a license debate playing out. Why, I ask? (I could give a million and one reasons, but I'm interested in engineering and tech, and real politics, not license politics.)
Why are people so up in arms about the CDDL? If Sun had used the MPL, with its obvious and documented flaws (as noted in Sun's original submission of the CDDL to the OSI), would that have been deemed acceptable?
Anyway, for the people interested in the technology: if you have questions on the article, let me know. I would be more than happy to answer any queries, and if I can't, there are thousands of engineers ( http://blogs.sun.com ) in Sun who can.
Hi,
> What about Zones in Solaris? — I think this is
> a “killer” feature for *any* NIX OS. Does it work
> or not, please give me some REAL INPUT?!?!
Yes it works, and it works damn well. The example I gave had three zones (plus the global zone) running on my laptop, and it barely registered on the system in terms of resources. John Clingan ( http://blogs.sun.com/jclingan ) has gotten 600 zones running on a single V880 ( http://www.sun.com/servers/midrange/v880/ ) server (that was before he had to hand it back to do work), and 190 on an Ultra 10 (which is, in fairness, a what, seven or eight year old machine). Combine zones with the accounting options on Solaris, processor sets etc., and you can pretty much partition things up any way you want, and securely.
Download Solaris 10 and play around with it; any questions, let us know. We really want people to use these features, and they were designed for the datacenter.
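For anyone who wants to try this out, here is a minimal sketch of what creating a zone looks like. The zone name, zonepath, address and network interface below are just examples for illustration, so adjust them for your own box; the zonecfg(1M) and zoneadm(1M) man pages have the full details.

    # as root in the global zone, define the zone
    global# zonecfg -z webzone
    zonecfg:webzone> create
    zonecfg:webzone> set zonepath=/zones/webzone
    zonecfg:webzone> set autoboot=true
    zonecfg:webzone> add net
    zonecfg:webzone:net> set address=192.168.1.20
    zonecfg:webzone:net> set physical=bge0
    zonecfg:webzone:net> end
    zonecfg:webzone> verify
    zonecfg:webzone> commit
    zonecfg:webzone> exit

    # install the zone, boot it, and attach to its console for the first-boot questions
    global# zoneadm -z webzone install
    global# zoneadm -z webzone boot
    global# zlogin -C webzone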
"In fact, at the FSF site, they say that "compatibility" means the resulting combined program has to be released under the GPL!"
False claim. There is no such statement.
Don't feel bad, my article about Solaris 10 got stupid posts about ugly fonts! Never mind the fact that I never mentioned fonts in the entire article, or that I had any problems with fonts. I read all of the pieces listed and they are nice articles which demonstrate DTrace and Zones very well.
The question I have is if Linux is “so good”, why don’t we see pieces like the DTrace and Zones materials on here? All we seem to see and hear is benchmarks, general bitching, and complaints about licensing (which really doesn’t matter from an Enterprise viewpoint anyhow). I see a lot of people talking, but not many actually using and writing about their experiences with FOSS/Linux products.
One other thing I would like to see is more "on topic" posts; for the people who spend hours on end writing articles for publication here or anywhere else, seeing their work reduced to a FUD fest is, to say the least, disheartening. I was thinking about writing another piece until I saw some of the responses to the article, which makes me wonder if the people posting actually read it. The quality of the submitted articles here is partially in proportion to the responses (both quality and quantity) received. If you want better articles here, then maybe you should respond in kind rather than start a FUD war!
"The question I have is if Linux is "so good", why don't we see pieces like the DTrace and Zones materials on here?"
Being good doesn't mean replicating every feature of every other operating system. It just means it's able to do the things it's expected to do.
Linux has several features not in Solaris, by the admission of Solaris developers themselves.
If the feature set were the only thing that ever mattered, you would never see alternative operating systems step up as competition.
From a purely technical point of view, DTrace and Solaris Zones are great pieces of tech that have been implemented well, and yet no matter how Sun tries to promote their use, people don't seem interested.
As someone pointed out, every time DTrace, Zones (or SMF) are mentioned, there is little discussion about the tech and more about the licensing. This seems to happen everywhere on the net, not just within OSNews but on all the tech sites.
I feel that most people don't really care about the licensing; it's just that they care even less about DTrace, Zones and SMF. We're more interested in a choice of vendors, and a broad range of support for our favourite version of *nix (usually Linux), all as cheaply as possible.
An example: Xen
Xen provides similar functionality to Solaris Zones for an administrator, yet every time a Xen story pops up, people are talking about it. Why? It's a safe bet for systems administrators and developers: it's not just backed by one company, but several (Novell/Red Hat/IBM/HP) as well as the academic world. It runs not one commercial version of Unix, but several commercial and free ones too. Systems administrators are safe in the knowledge that learning the Xen way of doing things is useful; if they decide to switch from being a Dell house to an HP house running Red Hat instead of SUSE, it's still a useful thing to know.
Or is there something else I've missed?
I am planning on consolidating some real old servers we have lying around onto one older PIII with 512MB of RAM. The old servers ran NTP, DNS, and the last one ran Apache, Sendmail, and Cyrus IMAP. I see having three zones, one for each of these to run in, but am concerned that the performance would be real pokey. I know that Solaris 10 likes more than 128MB of RAM, so is this going to be a problem?
If features didn't matter, everything would be the same and there would be no competition at all! Why does a business select Windows, Solaris, AIX, or Linux over any other operating system? Because a particular feature, or feature set, appealed to that customer. What you use is based on a set of features that appeals to you. So features do matter, and an awful lot of commentary on this site has to do with features (like media player choices).
I also think you missed the point of my response: I don't see a lot of quality pieces about Linux features on this site. It is either because they are hard to find, or they are not being written. And not everybody knows every feature of a given operating system; to claim that would be arrogant, to say the least. As any piece of software changes, features are added or subtracted. And it is very easy for people to miss the "gold" in something because they do not know it is there.
So how about contributing something more relevant (like an article), rather than posting that a comment is “crap”.
How many machines are you trying to consolidate? This is what I would do: I would determine the average system load for each machine in terms of memory, CPU, disk utilization, and bandwidth utilization (a rough sketch of gathering those numbers is below). Then set up a test environment and test with the applications you want to consolidate to see if the performance is acceptable. If you are satisfied with the results, then go to a limited test in production.
For bandwidth-intensive applications you might want to consider adding more network interfaces.
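As a rough sketch of what gathering those baselines might look like with the tools bundled in Solaris (assuming the existing boxes run Solaris or something with comparable tools; the interval, count and interface name here are just examples, and sar/system accounting may need enabling separately):

    # CPU, run queue and free memory, sampled every 60 seconds for an hour
    vmstat 60 60

    # per-device disk utilization (watch the %b and wait columns)
    iostat -xn 60 60

    # per-process and per-user summary, useful for spotting the heavy hitters
    prstat -a 60 60

    # packet counts on the interface carrying the application traffic (bge0 is an example)
    netstat -i -I bge0 60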
"If features didn't matter, everything would be the same and there would be no competition at all! Why does a business select Windows, Solaris, AIX, or Linux over any other operating system? Because a particular feature, or feature set, appealed to that customer. What you use is based on a set of features that appeals to you. So features do matter, and an awful lot of commentary on this site has to do with features (like media player choices)."
NOTE: I said features aren't the only thing that matters. Case in point: the Mozilla suite has more features than Firefox.
People ARE interested in DTrace and Zones. We use both in our datacenter (on x86, x64, and SPARC). And as there's really little overhead in using Zones, we put every production app in a zone, and development environments are in zones too. DTrace has saved us a lot of developer time; we use it very often. And I know other companies which are using Zones. DTrace is a little bit harder to use than Zones, so the adoption rate is lower, and not everyone develops apps or needs to squeeze every CPU cycle from a server.
Maybe it's that the Solaris user base are mostly datacenter folks, etc., and these guys are not so visible on public forums, lists, etc. All they want is to get the job done, and Zones just work (no patching, recompiling, hacks, lack of documentation, etc.).
On the other hand it would be nice if Solaris attracted more "home" users, and I think it will happen, as Solaris 10 is free to use, and after OpenSolaris hits the streets.
And there's one more thing: documentation. Sun really provides (well, most of the time) excellent documentation for Solaris features. Want to use Zones? Read the documentation and it just works! Well, actually Zones are so simple for most configurations that reading only a few examples should be sufficient. The DTrace documentation is really good too.
I, for example, do not feel a need to write "wow! Zones are cool!" as a comment to every article about Zones. I know that, I use them and they work. And frankly I do not have the time to be publicly excited every time an article about Zones appears on the net.
“People ARE interested in DTrace and Zones. We do use both in our datacenter (on x86, x64, sparc).”
Too early to be using it in data centers, IMO. Your box, your mess anyway.
How about explaining the "risk" to us? Using DTrace is far less risky than using mdb. And that is some nice attitude there: "your box, your mess anyway".
I don’t buy your argument. Most of the people get into this spiraling flamefest because
a) There is not much left to point out about why Linux is better than Solaris, so it becomes a "but Solaris is not GPL" debate.
b) Most of the posters who have used Linux and typed configure, make, make install are not coders or sysadmins by profession; they just like the power and control that Linux gave them and are defending it for life without a clue about how systems are used in the real world.
In my debates I have found very few really technically sound posters on OSNews. Most of the posters here are just here to voice an opinion and not discuss the articles in question. Very few articles are actually debated on true technical merits.
Say a Linux article is posted: most of the Linux people will, most of the time, debate it reasonably. Most Windows-related articles also go reasonably.
Mac or Solaris articles seem to strike a chord and usually degrade into troll fests. The main reason is Macs offer something Linux and Windows desktops don't, so Mac-related articles degrade into "Macs are expensive, Macs are just as insecure" arguments.
Solaris articles used to degrade into "Linux outperforms Solaris", "Solaris is not open source", "Sun is losing money" arguments. Now that most of those are no longer true, the discussions have moved toward licensing.
just my 2 cents.
Anonymous (IP: —.162.152.5.static.vsnl.net.in) has no clue about any of the Solaris technologies in question.
He/she actually thinks cpusets in Linux are the same as Zones in Solaris. If you read the other 90+ post Solaris article it is quite obvious. I seriously doubt this person has any real world experience in deploying and administering large complex systems, or at least it doesn't show in his attitude or arguments.
Let it go, please.
What's up? I see we are at the same point in time again, talking about another Linux troll. I noticed that when I challenged Anonymous about DTrace vs. mdb and his poor attitude, he disappeared. And they wonder why we don't take them seriously.
I read about CPUSETS and I thought what an arcane way to allocate resources, nothing like Resource Management in Solaris.
It usually happens this way. I am slowly learning not to engage in arguments with trolls.
cpusets are similar to processor sets in Solaris, one part of Resource Management.
Linux so far has not had a serious role to play in the enterprise; IBM, HP and a few other companies are finally putting features enterprise customers want into Linux. So you are now seeing all the features Solaris has had for a while being added to Linux.
That is something I have noticed about Linux in general: how it tends to copy what someone else has done. I don't see a whole lot of innovation in the various products.
"That is something I have noticed about Linux in general: how it tends to copy what someone else has done. I don't see a whole lot of innovation in the various products."
It's a matter of perspective and your definition of Linux. Linux in the strictest definition, being the kernel, is not very innovative. But when most people talk about Linux they talk about GNU/Linux and the whole set of distributions.
When you talk about Linux as a complete OS there have been a few innovations. For example, X Windows was stagnant till Linux came onto the scene, and a lot of good technologies and extensions were brought into X. One might argue that that had nothing to do with Linux but with the open source movement, and one would be partially correct.
However, Linux and its advocates are slowly understanding the needs of enterprise customers. Most Linux users have traditionally been desktop users, so innovations in Linux tend to be in that space, but the desktop innovations usually tend to be platform agnostic even though the base development platform is Linux.
On the enterprise side of things, companies like IBM, HP and SGI, who have a stake in Linux's success in the enterprise, are porting the features they need to be able to sell it to their enterprise customers. So you see a whole lot of very old enterprise-class features being added to Linux.
The whole problem with this approach is that everyone is trying to make Linux a solution for every problem. It is hard to imagine that one kernel will eventually be able to satisfy the needs of all the markets without fragmentation. Also, enterprise customers don't like rapid and constant change and might eventually be turned off by Linux's constant change. Linux's adoption rate into the enterprise will start to drop as time moves on. This will happen when Red Hat/SUSE can't afford to keep a set of engineers working on 5 year old kernel releases because everything has changed drastically in the Linux space and the quality of support drops.
It is also hard to imagine developers being happy with constantly changing APIs and interfaces. What I said above will also be true for ISVs that need to support a piece of software for a longer period than one year. Linux has just started to become a serious contender in business and we have yet to see these kinds of problems surface, but it is bound to happen.
“How about explaining the “risk” to us, using DTrace is far less risky than using mdb. And that is some nice attitude there “your box. your mess anyway”.”
Any new technology is obviously risky. Yes, your box and your mess; otherwise you would be telling me not to be bossing you around.
"He/she actually thinks cpusets in Linux are the same as Zones in Solaris."
Off the hook. I said they were comparable and it's a FACT. It's very obvious.
"I noticed that when I challenged Anonymous about DTrace vs. mdb and his poor attitude, he disappeared."
Buddy, why did you even bring mdb into the argument? If you want to talk about comparable features, talk about DProbes, kprobes, LTT and OProfile.
"That is something I have noticed about Linux in general: how it tends to copy what someone else has done."
You haven't updated yourself.
"I read about CPUSETS and I thought what an arcane way to allocate resources, nothing like Resource Management in Solaris."
Explain exactly what is arcane about it. Next thing, you will be calling Xen an arcane technology just because Linux developers are involved.
“Linux so far has not had a serious role to play in the enterprise”
Wake up to reality. Sun isn't talking about OpenSolaris for nothing. They LOST a huge amount of market and mind share to Linux, especially Red Hat Linux. This is why Jonathan from your favorite Sun is LYING about Red Hat Linux being a proprietary distribution and not LSB compliant.
Ask those liars to name a single piece of proprietary software in Red Hat Linux and they disappear. Show them /etc/redhat-lsb and they never come back. Pathetic.
“Linux so far has not had a serious role to play in the enterprise”
Maybe I should rephrase that to make my intentions more obvious.
Linux up until recently has not had a serious role to play in the enterprise…..
“Linux up until recently has not had a serious role to play in the enterprise….. ”
Heard of Google?
Have you heard of Amazon? Which year was Linux deployed there?
1998 isn't recent.
"Wake up to reality. Sun isn't talking about OpenSolaris for nothing. They LOST a huge amount of market and mind share to Linux, especially Red Hat Linux."
You really need to wake up, pal. Sun is still the number one UNIX vendor on the planet and has been shipping more units than ever before. The market has now tipped toward low-end boxes.
The mindshare game has more to do with open source software, not Linux in particular; a lot of government agencies across the world want to mandate that all software they use be open source. Sun would be foolish not to listen to its customers, and they have appropriately responded to a changing market.
It's all perception. Linux was considered to be faster than Sun (Sun used here deliberately, from conversations with real people). People conflated x86 performance with Linux. It has been shown that Linux and Solaris on the same hardware have little to no difference in performance. The differentiating factors are features, license and support. With most of those at parity, the differentiators become support and features.
IBM and HP push you to Red Hat/SUSE, and they won't use their support infrastructure unless you purchase a specific level of support. What happens here is that unless IBM's and HP's support contracts are cheaper than Sun's, customers will be constantly ping-ponged between "it's an IBM problem" and "it's a Red Hat problem".
OK, I’ll give you the X part. You tend not to think about a product if you don’t use it much. I only use X (CDE) at work because of the method we use to access the other network (Sun Rays).
And that is the crux of the problem with Linux: I don't necessarily see the fragmentation as being a good thing. Too many distros, too many opinions as to how something should work; choice is not always good. I also see excellent potential for "vendor lock-in" with Linux being supported by IBM, HP, and SGI. And that is not good at all.
"You really need to wake up, pal. Sun is still the number one UNIX vendor on the planet and has been shipping more units than ever before. The market has now tipped toward low-end boxes."
Where did you get those statistics from?
Did Sun lose market share to Red Hat Linux? Yes.
Did Sun people LIE about Red Hat Linux being proprietary and not LSB compliant? Yes.
Did Jonathan admit that he was inclined to talk about open-sourcing Solaris because of Linux? Yes.
Did Oracle switch from Solaris to Linux as the development and reference platform? Yes.
Did Sun switch between supporting Solaris on x86 on and off at least three times? Yes.
Where does that leave you fanboys?
“I also see excellent potential for “vendor lock-in” with Linux being supported by IBM, HP, and SGI. And that is not good at all.”
Irony: all these companies support Linux, which by itself shows that competition is wide and there is no vendor lock-in.
Vendor lock-in is simply not possible with GPL'ed products. Companies like Red Hat offering service and support never results in lock-in.
"The mindshare game has more to do with open source software, not Linux in particular; a lot of government agencies across the world want to mandate that all software they use be open source. Sun would be foolish not to listen to its customers, and they have appropriately responded to a changing market."
Pity you don't realise that governments have already SWITCHED. They aren't going to jump to Solaris for DTrace and Zones. You are kidding yourself.
And BTW, there is still NO OpenSolaris after 5 years of talk.
"Heard of Google?"
Google is not a traditional enterprise; their whole business model revolves around designing around failures in hardware and software. Read the GFS paper: they support themselves, and Linux just gives them prewritten software that they don't have to write themselves. Almost everything in their infrastructure is their own technology, which is explicitly designed to work around failures. They hire a large group of people to make this work.
"Have you heard of Amazon? Which year was Linux deployed there? 1998 isn't recent."
Amazon is an all-HP shop. HP sold them Linux for their web servers and HP-UX boxes for the backend. In 2000 they were still not an all-Linux shop, and not in 1998 for sure; not even now, for that matter.
http://news.com.com/2100-1001-275155.html?legacy=cnet
That was an HP vs. Sun battle and not a Linux vs. Sun battle.
"Almost everything in their infrastructure is their own technology, which is explicitly designed to work around failures. They hire a large group of people to make this work."
Buddy, tell that to someone who hasn't actually seen the Google infrastructure. They have a few cluster machines running an indexing filesystem. All their other boxes run stock Red Hat Linux; nothing special about it at all. We just set up 120 machines with RHEL 4 for them last week in bgl.
"In 2000 they were still not an all-Linux shop, and not in 1998 for sure; not even now, for that matter."
Who cares if they were "all Linux" or not? They were still using Linux starting from 1998 for critical purposes. It was one of the major events paving the way for adoption of Linux in the enterprise.
"That was an HP vs. Sun battle and not a Linux vs. Sun battle."
Regardless of whose battle it was, it's clear that Linux has been used in the enterprise for a long, long time.
"Buddy, tell that to someone who hasn't actually seen the Google infrastructure. They have a few cluster machines running an indexing filesystem."
Isn't that the core of their technology?
"Who cares if they were "all Linux" or not? They were still using Linux starting from 1998 for critical purposes. It was one of the major events paving the way for adoption of Linux in the enterprise."
No, from 2000, and not for critical purposes.
"Regardless of whose battle it was, it's clear that Linux has been used in the enterprise for a long, long time."
Solaris, AIX, HP-UX and Windows have been for a lot longer. So Linux is very recent in the scheme of things.
OK, I am going to stop this discussion here; it's pointless.
"Isn't that the core of their technology?"
No. The core strength is not failover clustering; that's TRIVIAL to replicate. The real strength lies in the number of people with a huge amount of experience in search algorithms and innovative ideas. In other words, it's the people at Google that make it worthy.
"No, from 2000, and not for critical purposes."
Crap. In 1998 Amazon was running it for the ecommerce database. I call that critical.
"Solaris, AIX, HP-UX and Windows have been for a lot longer. So Linux is very recent in the scheme of things."
I repeat, 1998 isn't recent at all.
"OK, I am going to stop this discussion here; it's pointless."
wimpy
http://www.eweek.com/article2/0,4149,1455159,00.asp
Soon, Amazon will be able to add another title to its collection: pure-Linux enterprise. Killalea announced in his keynote presentation at LinuxWorld here on Wednesday that Amazon’s last holdout application—the company’s 14-terabyte-plus data warehouse—will be moved over to Linux servers running Oracle’s Real Application Clusters (RAC) software by the end of the second quarter of this year...
Amazon started its move to Linux in 2000, when it switched its Web servers to the open-source operating system. Over the past four years, Killalea said, the Seattle company has moved more and more of its infrastructure from Sun Unix servers to HP ProLiant servers running Linux……
Sorry, your 1998 date is way off, and your claim that they were running their ecommerce database on Linux in 1998 is laughable at best.
You have come up with two examples of Linux in the enterprise. I can bring up thousands for any of the other OSes I mentioned.
Amazon became a pure-Linux enterprise only in 2004, about a year ago. A year ago is very recent.
"Sorry, your 1998 date is way off, and your claim that they were running their ecommerce database on Linux in 1998 is laughable at best."
Laugh at yourself. Amazon did set up ecommerce databases on Linux with the help of HP, and HP didn't use Oracle. HP basically had a full-blown center for this. We managed to finish the complete integration and move to production in Oct 1998.
You can only quote random links. What I am talking about is real-life experience. You are talking about the Oracle story instead of the Amazon story.
“Technology is not your cup of tea. ”
Sure. Been in the business for a decade. Saw Solaris fade away and customers demanding Linux. Asia is a riper place for a big-picture view of what's going on.
> Buddy, why did you even bring mdb into the argument? If you want to talk about comparable features, talk about DProbes, kprobes, LTT and OProfile.
Sigh. Laborious sigh. This has been hashed over so many times, so rather than forcing people to go look for the URLs each time, I've collected a few standard rebuttals on my blog.
http://blogs.sun.com/roller/page/fintanr/20050306#dtrace_compared_w…
Please note that these include a blog posting by Daniel Berrange of Red Hat, and a Slashdot posting by John Levon, author of OProfile.
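For anyone who hasn't actually tried DTrace, the kind of thing those comparisons are about is the ability to ask ad hoc questions of a running system. Purely as an illustration (this is a generic one-liner, not anything from the linked postings; run it as root on a Solaris 10 box):

    # count system calls by the process making them, until you press Ctrl-C
    dtrace -n 'syscall:::entry { @[execname] = count(); }'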
Now actual work calls.
Hi,
The NTP and DNS stuff should be a snip. Do you have some details on the usage levels of your Apache/Sendmail/Cyrus box? How much I/O etc. is going through it?
Solaris 10 has a minimum requirement of 128MB of memory; more details can be seen at http://docs.sun.com/app/docs/doc/817-0544/6mgbagb11?a=view
Also, do you have some details on the box that you are planning on deploying to?
Robert Escue’s comment http://www.osnews.com/comment.php?news_id=9886&limit=no#343661 pretty much covers the advice I’d give regarding testing before going into production.
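If you do go ahead on that PIII, one thing worth looking at is the fair share scheduler so the mail/web zone can't starve the DNS and NTP zones. Purely as an illustration (the zone name and share value below are invented, and the exact resource controls available depend on your Solaris 10 release), it looks roughly like this:

    # make the Fair Share Scheduler the default scheduling class
    global# dispadmin -d FSS

    # give the mail/web zone a relative CPU share entitlement
    global# zonecfg -z mailzone
    zonecfg:mailzone> add rctl
    zonecfg:mailzone:rctl> set name=zone.cpu-shares
    zonecfg:mailzone:rctl> set value=(priv=privileged,limit=20,action=none)
    zonecfg:mailzone:rctl> end
    zonecfg:mailzone> commit
    zonecfg:mailzone> exit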
– Fintan
"False claim. There is no such statement."
My claim is true.
http://www.fsf.org/licensing/licenses/gpl-faq.html#WhatDoesCompatMe…
“The GPL permits such a combination provided it is released under the GNU GPL.”
In short, the GPL is the ultimate assimilation license, which is what the FSF intended. This is fine for their purposes; it’s just that their purposes aren’t universal.
The thing that is so amazing is that I don't look at OSNews for a whole day, and I must say it isn't encouraging how less than a dozen comments bloom exponentially into a really stupid flamewar. It is clear most people have no real understanding of licensing issues, and they just regurgitate the sound bites fed to them by articles linked at Slashdot. It's like conversing with cult members.
"Amazon did set up ecommerce databases on Linux with the help of HP, and HP didn't use Oracle. HP basically had a full-blown center for this. We managed to finish the complete integration and move to production in Oct 1998."
Post some proof for that statement. All public information says otherwise, and that Amazon moved its web servers to Linux in 2000.
Also, I find it funny that Amazon bought HP ProLiant servers in 1998 when the HP-Compaq merger hadn't yet happened. HP was not selling Linux on PA-RISC in 1998, so what was Amazon running their mythical ecommerce database on, thin air?
"Also, I find it funny that Amazon bought HP ProLiant servers in 1998"
Did anyone say it was ProLiant servers? Go get the grease out, please.
"Did anyone say it was ProLiant servers?"
Yes, both Amazon and HP in their respective press releases did. I would rather trust those two than an anonymous troll who claims he helped in a migration in 1998 that never happened.
Since you claim to have inside information: what version of the kernel did you deploy? What was the database software and version?
fintanr, I plan to install the Container Manager 1.1 beta on that same v880 to manage those zones in a toolable manner. Yeah, the v880 got re-assigned for a while (bummer), but I am hoping to schedule some time on it...
"Yes, both Amazon and HP in their respective press releases did."
You are still talking about the second production system, which used Oracle and ProLiant servers. You are correct about that.
“What version of the kernel did you deploy? What was the Database software and version?”
Red Hat Linux 5.2; I don't remember the exact kernel version. The first system used Postgres with a wrapper library for easy transition.
The system ran remarkably well in relative terms. Postgres was pretty slow compared to the revision which used Oracle. I am sure that a new system could use Postgres, though.
"You are still talking about the second production system, which used Oracle and ProLiant servers. You are correct about that."
No, I am talking about the only time they used Linux for their database.
http://www.theregister.co.uk/2001/10/31/linux_switch_saves_amazon_m…
“The filing states: “The decline in absolute dollars spent during the three months and nine months ended September 30, 2001 [on technology and content] in comparison to the corresponding periods in the prior year primarily reflect our migration to a Linux-based technology platform that utilizes a less-costly technology infrastructure, as well as general price reductions for data and telecommunication services due to market overcapacity.”
I wonder why Amazon would account for savings it derived from Linux in its 2001 SEC 10-Q filing if they started to use Linux in 1998. Either Amazon is lying in its SEC filing or you are. I know for certain you are.
"Red Hat Linux 5.2; I don't remember the exact kernel version. The first system used Postgres with a wrapper library for easy transition. The system ran remarkably well in relative terms. Postgres was pretty slow compared to the revision which used Oracle. I am sure that a new system could use Postgres, though."
Riiight. Amazon used Red Hat 5.2 and Postgres for critical systems in 1998... 5.2 was released in Nov 1998; no company in their right mind would deploy critical systems on an untested OS the minute it is released. Also, no company would deploy mission-critical systems on Postgres.
Now I know you are full of it.
"5.2 was released in Nov 1998; no company in their right mind would deploy critical systems on an untested OS the minute it is released. Also, no company would deploy mission-critical systems on Postgres."
Ya, sure, as if the difference between 5.0 and 5.2 was that big. Go to hell.
"Ya, sure, as if the difference between 5.0 and 5.2 was that big."
5.0 used the 2.0.32 kernel and 5.2 used the 2.0.36 kernel. Also, I believe the glibc versions were different. Those are big differences for most companies and would require extensive testing on non-production systems before deployment.
Looking at Amazon's Oracle deployment, they spent 2 years testing it before deploying it. Like I said before, I seriously doubt you have worked in the industry for 10 years and have Linux experience. Your attitude and knowledge of how things work seem to contradict your claim.