“Apple’s quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware.[…] BeOS implements its TCP stack in a user process, microkernel-style, and so does QNX. Both have notoriously slow TCP stacks.” Read the article at LinuxJournal. Our Take: Oh, yeah, this is why Be rewrote the whole networking stack with many of its parts living in kernel space and named the project “BONE”. As for MacOSX lagging behind Linux, we should not forget that Apple announced that MacOSX will sync with FreeBSD 4.x for MacOSX 10.2, and that it will also sync with the next-generation FreeBSD, 5.x, next year. Technically speaking, FreeBSD 5.x is one of the most advanced operating systems one can find today (or tomorrow :).
Why not take GNU/Linux 2.5.x? Isn’t it as advanced as the current unstable branch of FreeBSD?
What about 2.6 then?
I don’t know why you say that; it’s like saying that WinXXXP, or whatever the advertizoids will call the new WinNT 6.9, is now the most advanced… (I hope that one never appears…)
The article might seem inflammatory, but it’s true. More so than almost any other task, networking brings the microkernel model to its knees. For an idea of why, think of the sheer number of packets an average networking interface has to send. For a 100 mbps network connection, a packet has to be sent once every 120 microseconds, or about 100 times more often than commands have to be sent to the average hard disk. For gigE, it becomes even worse, about 12 microseconds per packet. It is hard to batch ethernet packets, because that affects connection latency. As a result, applications have to do a lot more communication with the OS to keep the packets flowing. In a microkernel system, this communication has a lot of overhead. OTOH, most home users don’t have gigE, and modern hardware can handle 100mbps reasonably well, even on a microkernel. As for the other parts of the OS X architecture, the article’s true for only a little while. To extend the original post, the current version of OS X is based on 4.4BSD-Lite2 with stuff (networking, user tools) thrown in from various BSDs. Moving up to FreeBSD 4.x will make things much nicer. Still, I have a hard time understanding why Apple doesn’t just ditch Mach and move everything to a pure FreeBSD system. Since the OS X system server is monolithic anyway, and runs in kernel space, Apple doesn’t get ANYTHING from the microkernel design, except the performance hit of Mach.
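Those packet-interval figures check out, assuming full-size 1500-byte Ethernet frames and ignoring framing overhead; a quick sketch:

```python
# Back-of-the-envelope packet-interval math for a saturated link,
# assuming full-size 1500-byte Ethernet frames (framing overhead ignored).

FRAME_BITS = 1500 * 8  # 12,000 bits per full-size frame

def packet_interval_us(link_bps: float) -> float:
    """Microseconds between packets on a saturated link."""
    return FRAME_BITS / link_bps * 1e6

print(packet_interval_us(100e6))  # 100 mbps -> ~120 us per packet
print(packet_interval_us(1e9))    # gigE     -> ~12 us per packet
```

Smaller frames only make it worse: at the 64-byte minimum frame size, the interval shrinks by another factor of ~23.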
> Why take GNU+linux2.5.x and isn’t it as advanced as the current unstable branch of freeBSD?
Yes, I believe that FreeBSD 5.x is more advanced than Linux 2.5.x
As for WinXP, yes, I believe that WindowsXP/.NET_Server, Solaris, HP-UX and FreeBSD 5.x are *the* advanced OSes. More so than the rest are.
But this is just my personal belief.
“Yes, I believe that FreeBSD 5.x is more advanced than Linux 2.5.x”
Any reasons for this, or is it more a eugenia-gut-feeling kind of thing? @_@
“take GNU+linux2.5.x and isn’t it as advanced as the current unstable branch of freeBSD?
what about 2.6 then”
It depends. In many areas, Linux is probably as advanced as FreeBSD. But there are a couple of areas where Linux seriously lags FreeBSD. The main area that comes to mind is the process scheduler. The Linux process scheduler is relatively lame when compared to FreeBSD’s process scheduler. This is why FreeBSD has an edge on high end servers. The Linux process scheduler tends to have problems juggling very high numbers of concurrent processes.
Another area where I would say that Linux is lagging FreeBSD is memory management and possibly security options. FreeBSD 5.0 will implement Solaris-style access control lists (ACLs). AFAIK, Linux doesn’t have access control that allows the flexibility of ACLs.
Linux network performance is a joke. No, seriously. Unpatched and without being heavily modified, it blows. It’s barely acceptable in the arena of modern operating systems.
Network performance is a solved game. All OS writers of high calibre know *how* to get good performance. It’s just a matter of making those sacrifices to the degradation of the rest of the system.
QNX, bad performance? It’s a real time system – both variants that are out there now. Or is this a discussion of graphical responsiveness? And I don’t ask that question lightly – it’s a fair one.
Are we judging these OS implementations on their GUI responsiveness? ‘Cus the ones that are being lambasted have GUI built into the OS. When your GUI sits on top of the system, not part of it – it’s not going to be as affected by the vagaries of under-the-hood behavior.
If the argument is that we don’t have any other way to gauge, then I’ll just shake my head. ‘Cuz QNX isn’t a slow OS, and its networking stack is a modified BSD stack. The leaders of the industry. The cat’s meow!
And I’m not saying the article is wrong, btw – I own a TiBook, and w/o jaguar it’s slow. I just want to know what the criteria of this argument is.
Peace,
me
“Linux network performance is a joke. No, seriously. Unpatched and without being heavily modified, it blows. It’s barely acceptable in the arena of modern operating systems. ”
That’s a good point. And one I forgot to mention. The Berkeley TCP/IP stack is the best networking implementation in the world. That’s why so many commercial operating systems (including Windows) use various parts of the Berkeley TCP/IP stack. Linux chose to write its own TCP/IP stack though, and the result is pretty inferior compared to the Berkeley TCP/IP implementation.
Strangely I’m happy with gentoo
I’d like to see numbers anyway; if we look at the uptimes we can see that BSD and IRIX rule (from Netcraft stats).
Is IRIX so great then?
BTW, what scheduler are you talking about? There are some in development just now, I think…
And about ACLs, there is support for them…
http://www.cs.nmsu.edu/~lking/mach.html
“Mach 3.0 was originally conceived as a simple, extensible, communications microkernel. It is capable of running standalone, with other traditional operating system services such as I/O, file systems, and networking stacks running as user-mode servers.
However, in Mac OS X, Mach is linked with other kernel components into a single kernel address space. This is primarily for performance; it is much faster to make a direct call between linked components than it is to send messages or do RPC between separate tasks. This modular structure results in a more robust and extensible system than a large, purely monolithic kernel would allow, without the performance penalty of a pure microkernel.
Thus in Mac OS X, Mach is not primarily a communication hub between clients and servers…..”
So nothing he said holds water…
There is not a single scale we can use to determine which OS is the “best” or the most “advanced”. How do we even define such a thing as an “advanced OS”? Let me give you all my little take on this.
WinXP (.NET Server etc) offers a lot of services and functionality. Features such as .NET (CLR, C#, toolkit, etc), fibers, powerful IPC, ACLs, etc, are its trademark. There is more innovation than honing to perfection. The result is a rich OS. Microsoft intends to be revolutionary (I know many will scream and yell about this, just accept my view for the sake of argument here).
*BSD on the other hand takes their operating system and tries to perfect it. Few innovations are made, in favor of optimizing, tightening security, and polishing the system. I see BSD as conservative.
Linux tries to lie in the middle ground. It stays a Unix-like system, yet it adds freely from different sources. There is no central decision-making, and features are added alongside the work of optimizing and improving the system. I feel that Linux is a constant reaction to the rest of the OSes, adding the features of others and trying to outperform everyone.
All three can be claimed to be advanced, on account of richness, raw performance, security, etc. All can be said not to be advanced for lack of the same things.
I realize that I didn’t reach any conclusion with this, but I hope to make those few who read this think a bit about it. I don’t feel that there is a system out there which is completely superior; I am still waiting for it. In the meantime I use XP.
I have at long last started to write down the ideas I have for a new OS (incredibly inspired by Dave Haynie, he is truly a god of computing), so maybe there will be something vastly superior in the future, and if I don’t make it I sure hope one of you can.
To all of you, may you live in interesting times!
AmigaDE…Java… just to name something a bit more developed and useful….
“BTW what scheduler are you talking about? There are some in development just now I think…”
Yes. There are some in development… But that’s called vaporware. The point is not what Linux’s process scheduler will be like in the future. The point is that at this point in time on this day, the FreeBSD process scheduler is far more advanced than the Linux process scheduler. Sure that may change in the future. But right now, it is true.
another one of those dumb “my unix is best” rants that precedes Linux and even precedes gnu. go listen to rms speak on the topic. at least he has a clue what he’s talking about, agree with him or not, and the fool who wrote this thing seems to be 10 years behind, not to mention a general lack of unix background and knowledge… what a waste. why does linux need to bash osx or vice versa? they both agree that redmond sux.
“Linux network performance is a joke. No, seriously. Unpatched and without being heavily modified, it blows. It’s barely acceptable in the arena of modern operating systems. ”
I understand that the network stack was improved substantially in 2.4 over 2.2.
So is this info out of date? If not, where can I see the proof?
Oh ya, Simba, let’s not forget that Linux is more advanced than FBSD in some areas as well.
Doesn’t the Linux scheduler do better with 4 CPUs anyway? So it’s not a total loss.
And there’s Ingo’s new scheduler that certainly looks interesting.
But seriously, Mac OS X isn’t exactly known as a fast OS.
The linked article is an example of bad research. In several points, the author admits not knowing much about Darwin/OS X, yet still tries to explain how bad it is supposed to be.
I recommend to read this discussion (esp the reply from nibs) to see how much is wrong in that article:
http://arstechnica.infopop.net/OpenTopic/page?a=tpc&s=50009562&f=48…
microkernels are good for certain purposes, hybrids are good too
and monolithic ones too
the problems usually come when the surroundings (the rest of the OS and the applications) misbehave.
win32 showed how things can be done for the worst
the rest of the os makes it better.
I think that Quartz is the main cause of slowness, as well as a poorly optimized WM on top of a badly configured X (no matter if it is XFree or not), not the kernels.
(a good test would be to run darwin+XFree+fluxbox or any light WM and see if it runs faster or not.)
– BTW what scheduler are you talking about? there are some in developement just now I think…
– Yes. There are some in development… But that’s called vaporware.
Ingo Molnar’s O(1) scheduler is in 2.5, and there is a patch for 2.4. I use it right now on a heavily loaded dual PII/350 : believe me, before it had just enough power to handle the load, now it’s got more than a few CPU cycles to spare.
The O(1) code is better at handling SMP systems. Well, to be fair, the original scheduler wasn’t known to scale well on SMP, but that one does.
That’s a good point too: OS X’s current bottleneck is definitely not the kernel.
Anyone else have the feeling that this guy’s next article will be about how RISC chip designs “are mostly discredited now, however, because they have performance problems, and the benefits originally promised are a fantasy” ?
I am still really surprised by the amount of articles I read by supposedly smart people that are just rants trying to justify the fact that someone doesn’t want to buy a Mac or something.
The main area that comes to mind is the process scheduler. The Linux process scheduler is relatively lame when compared to FreeBSD’s process scheduler. This is why FreeBSD has an edge on high end servers. The Linux process scheduler tends to have problems juggling very high numbers of concurrent processes.
FreeBSD has an “edge” on high end servers? Are you on crack? FreeBSD has abysmal SMP performance, typically on a level with Linux 2.0. This is largely due to the almost complete reliance on a giant, kernel-encompassing spinlock. The elimination of this is one of the principal elements of the FreeBSD 5 effort (SMPng).
FreeBSD 5.0 will implement Solaris-style access control lists (ACLs). AFAIK, Linux doesn’t have access control that allows the flexibility of ACLs.
Linux has ACLs. Linux also has extended POSIX 1003.1e capability bits, and the NSA has a fully fledged MAC implementation available (and it’s going to be merged into 2.5 in the near future).
Another random idiot:
Linux network performance is a joke. No, seriously. Unpatched and without being heavily modified, it blows. It’s barely acceptable in the arena of modern operating systems.
Got any benchmarks to prove this or are you simply lying? The network stack is quite possibly one of the best parts of 2.4.
Simba added:
Linux chose to write its own TCP/IP stack though, and the result is pretty inferior compared to the Berkeley TCP/IP implementation.
Benchmarks? Proof?
http://www.samag.com/documents/s=1148/sam0107a/0107a.htm
Simba continued the spiral into ignorance with:
Yes. There are some in development… But that’s called vaporware.
Uh, Ingo’s new O(1) process scheduler is stable, technically advanced, extremely scalable and has been merged into 2.5. RedHat 7.3 and RedHat Advanced Server include a backport to 2.4 as well.
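For anyone curious what makes Ingo’s scheduler O(1), the core trick is one FIFO queue per priority level plus a bitmap of non-empty levels, so picking the next task costs the same no matter how many tasks are runnable. A toy Python model of that idea (not the actual kernel code, which also keeps per-CPU active/expired arrays):

```python
# Toy sketch of the O(1) scheduler's core data structure: per-priority
# FIFO queues plus a bitmap of non-empty levels. Picking the next task
# is a find-lowest-set-bit plus a dequeue, independent of task count.
from collections import deque

NUM_PRIOS = 140  # the real scheduler uses 140 priority levels

class RunQueue:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIOS)]
        self.bitmap = 0  # bit i set <=> queues[i] is non-empty

    def enqueue(self, task, prio):
        self.queues[prio].append(task)
        self.bitmap |= 1 << prio

    def pick_next(self):
        if not self.bitmap:
            return None
        # isolate the lowest set bit: numerically lower prio runs first
        prio = (self.bitmap & -self.bitmap).bit_length() - 1
        task = self.queues[prio].popleft()
        if not self.queues[prio]:
            self.bitmap &= ~(1 << prio)
        return task

rq = RunQueue()
rq.enqueue("nice_job", 120)
rq.enqueue("interactive", 100)
print(rq.pick_next())  # -> interactive
```

The old scheduler walked the whole runnable list on every schedule, which is why it fell over with very large numbers of runnable processes.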
Seriously, anonymous dude, name calling isn’t called for.
And the network stack *is* one of the best parts of the 2.4 kernel. Still sucks, though. Proof? What proof would satisfy you, and I’ll arrange for some. How about a detailed flow-trace analysis of the kernel activity when the machine is under load and network traffic is nearing/past saturation for the interfaces? I’m thinking of doing that one ‘cuz I want to test some distributed security software, and need to do it anyway. Oh, that’s right – I can’t get the data to you ‘cuz you’re a coward who posts anonymously.
Feh,
TLF
Culled this page from my bookmarks – use it to conduct your
own tests, everyone. Pissing contests and name calling aside, performing these tests and benchmarks while applying the theories will broaden the mind of the experimenter:
http://www.csm.ornl.gov/~dunigan/netperf/netlinks.html
Peace,
TLF
Random Idiot.
You people are _so_ wrong.
And I can’t believe the stuff Eugenia says.
I sure hope no one takes Eugenia’s word as holding some sort of value over people with actual knowledge about these things. 🙂
… too wrong, too often.
Still a good source of news though 🙂
Anyone notice a very anti-Linux streak with Eugenia’s postings? I sure do! It’s because Linux is so inferior and not as “advanced” as FreeBSD 5.x … haehae… 🙂
>Benchmarks? Proof?
You linked to a benchmark at Samag, which showed FreeBSD performing poorly against Linux. You should have searched OSNews about the article you linked, because we had reported the news that Samag had REVISED the results of that benchmark later!!!!!
http://www.osnews.com/story.php?news_id=45
>Anyone notice a very anti-Linux streak with Eugenia’s postings?
No. If you ask the MacOS users, they will tell you how anti-Mac I am. If you ask the Linux people, they will tell you how anti-Linux I am. If you ask the BeOS ones, they will tell you that I am a traitor. And the list goes on.
Get over it. I write whatever I believe is true in each specific *case*.
Henry writes “Anyone else have the feeling that this guy’s next article will be about how RISC chip designs “are mostly discredited now, however, because they have performance problems, and the benefits originally promised are a fantasy” ?”
LOL…Henry, are you just trolling, or are you really that ironically out of touch? RISC has been “mostly discredited now”! Nobody makes true RISC designs anymore.
Why? Performance problems!
RISC stands for “Reduced Instruction Set Computing”; the whole idea of RISC was to drastically simplify the CPU core design, which was in theory supposed to allow for much higher CPU frequencies. The higher frequencies are necessary in order to make up for the efficiency lost in reducing the number of instructions. The idea was that they could have most instructions execute in one clock cycle, and when more complex instructions were needed, they could compound simpler ones over several cycles. So in theory, RISC was supposed to be faster and cheaper than CISC.
What really happened? It’s history, folks — RISC designs never realized the high clock frequencies or the low prices that were needed to succeed. Old school chipmakers upped clock rates when RISC makers failed. And they found ways to utilize some elements of RISC technology, essentially beating the RISC crowd at their own game. In the end, so-called “RISC” chips are in name only, having increasingly complex designs to cope with lagging clock rates. So “the benefits originally promised are a fantasy” would be an apt description.
I am still really suprised by the amount of articles I read by supposedly smart people that are just rants trying to justify the fact that someone doesn’t want to buy a Mac or something.
So you claim. Without any proof to substantiate that claim, all you have is speculation. The Mac crowd has thus far been unable to discredit the article, and your ironic rant sure isn’t objective. So I could speculate that Mac owners put out rants like yours to justify the fact that they spent a lot of money and have little to show for it. In short, denial. At least my supposition has some basis to it!
All of this can *only* be detected in pedantic benchmark tests; in practise, no person using any of these systems would be able to tell the difference under normal use (web browsing, email, IRC, etc).
I know that UC Berkeley’s IP stack was the first, but that doesn’t mean that it’s the best. Could be, but it’s not automatic.
Now why would a major OS vendor decide to use a freely available IP stack vs. paying for one? Why, oh why? Because it was the best? Possibly, although that isn’t exactly consistent with Microsoft’s “good enough” quality reputation. Could it be because it was free?
The Linux stack works just fine for me. I take vague claims with a grain of salt. If the complaint isn’t even well defined, it usually ends up being a “between the ears” problem.
You linked to a benchmark at Samag, which showed FreeBSD performing poorly against Linux. You should have searched OSNews about the article you linked, because we had reported the news that Samag had REVISED the results of that benchmark later!!!!!
http://www.osnews.com/story.php?news_id=45
And the benchmark still shows FreeBSD, tweaked, having inferior performance when compared to an untweaked Linux system. What exactly was your point again?
If you ask the MacOS users, they will tell you how anti-Mac I am. If you ask the Linux people, they will tell you how anti-Linux I am. If you ask the BeOS ones, they will tell you that I am a traitor. And the list goes on.
I can identify, LOL! Isn’t it funny that just about everyone who “takes sides” like that is nursing a persecution complex?
I tend to use Linux more often than not, but have plenty of respect for FreeBSD as an OS. Because of that, I would be interested in hearing what you consider advanced about FreeBSD. Have you covered this in the past?
As for MacOSX lagging behind Linux, we should not be forgetting that Apple announced that MacOSX will sync with FreeBSD 4.x for MacOSX 10.2 while it will also sync with the next generation FreeBSD, 5.x, next year.
Sorry but I don’t see the connection. If Apple releases versions at the same time as FreeBSD, this will make OSX better somehow? How? Right now, OSX is slow and FreeBSD is quick. Will OSX become quick as long as dot versions are released synchronously with FreeBSD? Sounds like superstition.
BeOS does have user process networking. I don’t think the article is attacking BeOS, though why not? The network stack is slow!
Has to be!
Miles’ lack of knowledge regarding what comprises Mac OS X is eerily similar to Speed’s lack of knowledge regarding Mac OS X.
(wait, I take that back – further into the comments on that Linux Journal page, Miles comes back and fesses up that he didn’t know Mac OS X was a hybrid – Speed would never do that)
Speed…what I’m getting here is basically what you’re saying. The battle has already been fought, why do we see a new editorial about it every day?
Ok people, calm down.
Stop all the “my OS is better than yours” rants. Find something you like and use it. I started using FreeBSD over Linux because I liked using /stand/sysinstall rather than keeping track of tarballs. That’s it.
Don’t get me wrong. FreeBSD’s installer is nowhere near as easy as, say, Mandrake’s. All this arguing reinforces the belief of those that use windoze that unix is hard to use, hard to understand, and that you have to know a lot about computers to use it.
I use freebsd because I like unix. I use windoze because I need Photoshop and Dreamweaver. I don’t use a mac because I am poor, I don’t use linux/irix/aix/solaris/hp-ux because I already have a unix. If you don’t already have a unix, find one and learn it. But it will never be all things to all people. Don’t become like the mac zealots. Remember that every OS has good and bad points (some more than others).
Here is what the general public thinks:
Want your computer locked down? Run openbsd.
Want a pretty interface? Buy a mac.
Want an easy *BSD? Try FreeBSD.
Want to run a new OS on some arcane hardware? Netbsd.
Hate microsoft? Linux is for you.
Running a huge server? Solaris.
Really, Really, Really like IBM? AIX.
ok, now I’m starting to rant…
RISC: Reduced Instruction Set Computing/COMPLEXITY
CISC: The previous dominant design; coined after RISC.
Go get your facts straight, for Christ’s sake! RISC means that each of your instructions will execute in a single cycle, while a CISC instruction can take longer to complete. Thus, RISC designs lead to better instruction parallelism, better out-of-order execution, better virtualization, a smaller core, etc, etc, etc…
Need some introductory literature?
http://users.utu.fi/t/tnurmi/ciscrisc.pdf
You WILL find that RISC machines have LOADS and LOADS of instructions, and this is DEFINITELY not what puts casual *low-end* RISC (PowerPC) lagging behind ALWAYS low-end x86. Proof? Alpha. POWER. SPARC.
No one does “pure” RISC design anymore? Okay, so suddenly there are only two chipmakers out there, Intel and AMD, right? Those are not pure RISC because of *LEGACY* x86 software. This is where the money is. No technical merits involved, period. Apart from the x86 translation unit, they *are* RISC. Their speed gains come from faster bus, DDR RAM and, of course, higher clock at the cost of longer pipelines. However…
The current bottleneck in PowerPC architectures is the system bus and RAM speed, *NOT* the *processor* clock. The G3/G4 can crunch numbers better than ANY x86 architecture out there, if combined megahertz are more or less leveled (I mean PPC clocked at half the x86 speed). Why? Number-crunching rarely leaves the MAIN processor cache, let alone secondary cache. Does this ring a bell? No? MD5. Some Photoshop filters. And what makes current Apple hardware (Xserve not included) lag behind? Memory-intensive tasks.
Go troll somewhere else.
Also, vapor or not, check G5 when it’s ready. And check Xserve benchmarks when available, since Apple finally built a decent bus around the current PPC architecture.
meianoite, BD student of Computer Sciences
” What really happened? It’s history, folks — RISC designs never realized the high clock frequencies or the low prices that were needed to succeed. Old school chipmakers upped clock rates when RISC makers failed. And they found ways to utilize some elements of RISC technology, essentially beating the RISC crowd at their own game.”
Well gee, everybody except jealous weenies who refuse to buy a Mac admit that the 1 Gig G4 smokes the 2 gig P4 or the much-oversold Athlon 1.5. I would say RISC HAS fulfilled its promise. Particularly when you look at CHIP PRICE– the G4 has about a third as many transistors as the P4, if memory serves.
As for “allstar”: get a Mac.
Then you can run Photoshop and dreamweaver and W2K and BSD and LINUX all on ONE box. That’s something people seem to forget when they whine about Mac’s being $50 more expensive than some Dell piece of crap that will look ugly sitting on your desk and will crash every hour or so, while sending all your personal info straight back to Redmond.
I can’t get the data to you ‘cuz you’re a coward who posts anonymously
OT, but did YOU actually log on to OSNews? Ah, I see. So that means you are just as anonymous as the next person posting here!
-fooks
LOL…that last part is pretty funny!
“Really, Really, Really, Really, Really like IBM? VM/TSO.”
I can’t say for sure why, but I’m a big fan of ncurses. I liked Slackware’s setup program, and I like /stand/sysinstall too. Ports are nice in theory, but traversing all those directories becomes a pain, and if you’re behind a firewall it can become damn frustrating.
I’d like FreeBSD a lot better if it came with the GNU utilities by default, at least as an option. What they did at Berkeley was great for the time, but c’mon! Know what I mean?
Just a thought – OSX runs on machines with 66 or 100mhz FSB at the moment.
The majority of desktop *nix OSes are made for IA32 / x86 hardware, where the technology for 133, DDR 200 and 266mhz, and now 400mhz FSB hardware has been available for some time.
Is it possible that at least some reviewers are comparing software performance in a situation where the hardware is not equal?
I just read the two posts below Allstar’s! Hilarious! I almost thought they were real, then I realized that anybody that stupid wouldn’t be able to write, thinking the keyboard was out to get them or something. A CS student who thinks RISC means “COMPLEXITY”? Too funny! The line that equated “CHIP PRICE” with transistors was the best. Imagine a place where you exchange goods and services for transistors! The material writes itself… Yeah, the SHOUTING of punchlines was a giveaway that the same person wrote both posts. Good work, whoever you are! Now that’s the kind of troll that I like!
~Seedy~, if a vendor sells hardware and software as a unit, it sinks or swims as a unit. Isn’t the whole point of testing to find out which system is better? If one system outperforms another, then the first system is the better of the two. If you rig the test to favor the weaker contender, what have you proved?
“The race goes to the swiftest” is how the saying goes. If you do away with the race, and turn it over to the lawyers, then you better not need to have the swiftest!
Hi all,
The FreeBSD updated story refers to 2 tests:
http://www.samag.com/documents/s=1147/sam0108q/0108q.htm
Please note the revised conclusion is that “a tuned version of FreeBSD was as fast as an untuned version of Linux, for connection levels of 1500 sends or fewer, with FreeBSD performance declining steadily at simultaneous connection levels above 1500.”
That doesn’t look like a result that suggests FreeBSD is more advanced. For real users in real life, the results strongly suggest that both OSes benefit if the sysadmin reads up on tuning for high performance (doh!).
Just a question: do the people who attack microkernels read the documentation about modern microkernels before saying such things?
MacOS X uses a Mach kernel; Mach is a first-generation microkernel, and even if there are very interesting things in Mach, it is known to be very slow.
Just read some of the research the L4 people have done: http://l4ka.org/publications/. You’ll see that there are _a lot_ of ways to greatly improve IPC performance, and IPC is the major CPU sink in microkernel-based systems. Read at least http://os.inf.tu-dresden.de/pubs/sosp97/ and “Towards Real Microkernels” before spreading such _false_ things.
And even if there is a _very small_ performance overhead, the advantages of using a microkernel are many, in all domains: security (just look at the Hurd security model), capabilities (having a real VFS layer with translators allows many things to be done), stability, …
And on performance, if some things may be a little slower, others will be far faster. Just look at the enormous overhead of things like kioslave or gnome-vfs, which are totally obsolete in a system like GNU, or compare the IPC available in a classical Unix system (SysV IPC, pipes, signals, sockets) with the IPC that can be done in an L4-based system…
> It depends. In many areas, Linux is probably as advanced as FreeBSD. But there are a couple of areas where Linux seriously lags FreeBSD. The main area that comes to mind is the process scheduler. The Linux process scheduler is relatively lame when compared to FreeBSD’s process scheduler. This is why FreeBSD has an edge on high end servers. The Linux process scheduler tends to have problems juggling very high numbers of concurrent processes.
Which is why they replaced it with Ingo Molnar’s new O(1) scheduler. RH also includes it in their new kernels.
> Another area where I would say that Linux is lagging FreeBSD is memory management and possibly security options.
True. Although the NSA has made SELinux, with LOTS of security/access changes… The VM issue is still valid though.
> FreeBSD 5.0 will implement Solaris style access control lists (ACL). AFAIK, Linux doesn’t have access control that allows the flexibility of ACL.
The XFS filesystem from SGI has it, and http://acl.bestbits.at implements it for ext2/3. ReiserFS 4 will also have it. None are in the vanilla kernel yet, but they will probably be merged in the not so distant future.
“Well gee, everybody except jealous weenies who refuse to buy a Mac admit that the 1 Gig G4 smokes the 2 gig P4 or the much-oversold Athlon 1.5. I would say RISC HAS fulfilled its promise.”
Yeah gee anyone .. except those that run software .. or time how long it takes to do something.
Glenn
Random idiot:
Proof? What proof would satisfy you, and I’ll arrange for some. How about a detailed flow-trace analysis of the kernel activity when the machine is under load and network traffic is nearing/past saturation for the interfaces? I’m thinking of doing that one ‘cuz I want to test some distributed security software, and need to do it anyway. Oh, that’s right – I can’t get the data to you ‘cuz you’re a coward who posts anonymously.
Then post it here. I’m sure we’d all love to see it. Otherwise, stop lying about things you can’t prove.
Eugenia spluttered:
You linked to a benchmark at Samag, where showed FreeBSD performing poorly against Linux. You should have searched on OSNews about the article you linked, because we had reported the news that Samag had REVISED their results of that benchmark later!!!!!
Well done. FreeBSD still had its arse handed to it (despite being tuned as much as possible), largely because, as I said earlier, its scalability is truly dire.
“Well gee, everybody except jealous weenies who refuse to buy a Mac admit that the 1 Gig G4 smokes the 2 gig P4 or the much-oversold Athlon 1.5. I would say RISC HAS fulfilled its promise.”
Okay, now, you want to compare a dual 1 GHz G4, the high end of G4s, with a 2 GHz Pentium 4, not the fastest out there, or an Athlon XP 1500+, which was discontinued (the lowest end is 1600+ now). Spells “fair benchmark” all over it, right?
Yeah gee anyone .. except those that run software .. or time how long it takes to do something.
Those would be lifeless geeks.
Please note the revised conclusion is that “a tuned version of FreeBSD was as fast as an untuned version of Linux, for connection levels of 1500 sends or fewer, with FreeBSD performance declining steadily at simultaneous connection levels above 1500.”
That doesn’t appear like a result that suggests FreeBSD is more advanced. For real users in real life, the results strongly suggest that both OS’s benefit if the sysadmin reads up on tuning for high performance (doh!).
Matt Dillon (the guy who wrote the ‘tuning’ man page referred to in the article) explained the falloff at 1500 connections (I saved the email somewhere, unfortunately I can’t find it). Basically it goes like this: each network connection has a certain amount of buffer space allocated for it. As the number of connections times the buffer space gets closer to the amount of available memory, performance starts to fall off. If you want to accept more connections, reduce the amount of buffer space per connection (or free up more RAM).
Not terribly hard, it’s a shame the people doing the test couldn’t figure it out.
–Jon
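Dillon’s arithmetic above is easy to sketch in shell. The figures below are assumptions for illustration (1 GB earmarked for buffers, 64 KB of buffer space per connection), not actual FreeBSD defaults:

```shell
# rough ceiling on connections before the falloff described above:
# connections * per-connection buffer space ~ memory available for buffers
mem_bytes=$((1024 * 1024 * 1024))        # assume 1 GB set aside for network buffers
buf_per_conn=$((64 * 1024))              # assume 64 KB of buffer space per connection
max_conns=$((mem_bytes / buf_per_conn))
echo "$max_conns"                        # 16384 connections before memory pressure
# halving the per-connection buffer doubles the ceiling:
max_conns_small=$((mem_bytes / (32 * 1024)))
echo "$max_conns_small"                  # 32768
```

Plug in the real buffer sizes and RAM of a given test box and you get the connection count where throughput starts to slide.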
Micro v Monolithic
Old discussion. Both have pros and cons, and most modern OS designs are hybrids these days.
Micro Kernels have a cleaner design, are easier to write and maintain due to their modular construction. However, if they have memory protection and/or virtual memory, then there are issues relating to performance that are hard to avoid.
Monolithic kernels can perform better where memory protection and/or virtual memory exist, as moving data around is simpler (often no actual movement is needed).
The actual issue is not so much the relative merits of the two approaches (on that score micro kernels *seem* a better design), but how the two approaches handle the overheads that memory protection and virtual memory place on an OS. On that score, it is easier for a monolithic kernel to sidestep the problem.
And tangentially related…
Discussing the relative merits of desktop QNX is a bit silly. QNX *has* a desktop, yes, but it is not, and was never designed to be, a desktop OS. It is a real-time OS for embedded systems, and on that count a microkernel is much more sensible and flexible. On the desktop, the issues are less relevant, and a monolithic kernel is as reasonable a choice as any (though with modern hardware requirements, monolithic kernels need some kludges to enable more dynamic driver handling – not a major issue however).
Well, people sometimes can’t notice dashes, can they?
Do you know that acronyms “evolve” according to their popular uses? RISC is used in reference to both of the meanings I mentioned, and notice that they’re not mutually exclusive.
Again: Go get your facts straight, troll. And I’m not the same person you mentioned; I was just too tired to use UBB tags. And you don’t actually deserve any tidiness from those who reply to your stupidities.
“I understand that network stack was improved in 2.4 substantially over 2.2.”
Yes. It did improve substantially. But it still falls short of the performance and robustness of the Berkeley TCP/IP stack. It’s difficult to write a good TCP/IP stack, and I’m not sure why Linux decided to reinvent the wheel instead of using the Berkeley stack. After all, most of the GNU utilities are ports of the BSD utilities.
“Doesn’t the linux scheduler do better with 4 cpu’s anyway? So it’s not a total loss.”
FreeBSD 5.0 blew past Linux in SMP performance. The main reason is that FreeBSD 5.0 got a major boost from the newly integrated BSDI SMP code. Most of FreeBSD’s SMP code was rewritten and replaced with the BSDI code for 5.0.
Most of FreeBSD’s SMP code was rewritten and replaced with the BSDI code for 5.0.
Was rewritten? Last I heard, it’s still in the process of being rewritten.
Taking the last stable versions of both Linux and FreeBSD, Linux blows past FreeBSD in SMP performance.
Here we observe the species Linuxus Zealotous engaging in their typical flaming temper tantrum behavior because they were exposed to the fact that their OS is inferior to FreeBSD in many ways.
FreeBSD has an “edge” on high end servers? Are you on crack? FreeBSD has abysmal SMP performance, typically on a level with Linux 2.0.
Nice try. Problem is x86 hardware doesn’t scale well, so people that need serious SMP performance aren’t using x86 hardware anyway. But Apache benchmarks show that FreeBSD can handle roughly 30% more hits per second than Linux can.
“Linux chose to write its own TCP/IP stack though, and the result is pretty inferior compared to the Berkeley TCP/IP implementation.
Benchmarks? Proof?”
This is called common industry knowledge. If I have to prove this to you, then it just proves you know very little about the networking industry. Even Microsoft acknowledges that the Berkeley networking stack is the best in the world. Much of the Berkeley stack is used in Windows 2000 and Windows XP. And this from the company that can’t stand open source.
Also, your link to the SA Mag article proves how little you even know about FreeBSD and how little research you did. SA Mag later acknowledged that they didn’t know what the hell they were doing with FreeBSD. They hadn’t even enabled soft updates on the file system. Even a neophyte FreeBSD user reading this article could tell that Sys Admin didn’t do their homework and didn’t know what the hell they were doing with FreeBSD. Sys Admin got hundreds of emails and redid this benchmark after people had told them how to properly tune FreeBSD.
“Uh, Ingo’s new O(1) process scheduler is stable, technically advanced, extremely scalable and has been merged into 2.5.”
Nice try. 2.5 is called a developer release. 2.5 is not intended for production use, and 2.5 NEVER will be in production use. Perhaps you are not aware that odd numbered kernels don’t get placed into production? They are developer versions for playing with new features. As far as it being merged into 2.4, I have yet to see proof that it is stable. Give me evidence that it is stable. And don’t give me some Sys Admin article that Sys Admin itself acknowledged was crap because they didn’t know what they were doing with FreeBSD.
I don’t think Speed is stupid. I think he/she is a troll. In another thread, he is trying to tell me that no one is using Java. After I presented him with an Evans Data Corp survey that said over 50% of US programmers were using Java and 78% of IT execs said Java is playing or will play an important role in their web application strategy, and gave him a list of 49 major companies using Java in major ways (many of them Fortune 500 companies), he proceeded to tell me I hadn’t presented him with any evidence that Java is being widely used.
“Was rewritten? Last I heard, it’s still in the process of being rewritten.”
I could be wrong, but I am pretty sure that the SMP rewrite is mostly complete. FreeBSD 5.0 should be released in November, and I think that the rewrite is finished on the SMP code. I could be wrong about that though.
After all, most of the GNU utilities are ports of the BSD utilities.
Uh, no they aren’t. The GNU utilities are completely new implementations which are, more often than not, significantly better than their functionally anemic BSD counterparts.
Nice try. Problem is x86 hardware doesn’t scale well, so people that need serious SMP performance aren’t using x86 hardware anyway. But Apache benchmarks show that FreeBSD can handle roughly 30% more hits per second than Linux can.
Did you even read what I wrote?
First of all neither OS is limited to x86 only (though FreeBSD has far more limited cross-platform support).
Secondly, the technical inferiority of FreeBSD 4.x’s SMP implementation is well known. It lacks a multithreaded network stack. It lacks fine-grained locking. It just doesn’t scale as well as the competition.
This is called common industry knowledge. If I have to prove this to you, then it just proves you know very little about the networking industry. Even Microsoft acknowledges that the Berkeley networking stack is the best in the world. Much of the Berkeley stack is used in Windows 2000 and Windows XP. And this from the company that can’t stand open source.
This is a pretty pathetic thing to say. “Oh, it’s common knowledge so I don’t have to prove it to you” is the sign of a clueless individual. If it’s common knowledge (hah!) surely there will be lots of benchmarks showing that? Or are you simply a FreeBSD fanboi who can’t quite bring himself to grasp the truth?
FYI, Microsoft went for a ground up implementation in NT/2K/XP. Unless you can prove otherwise (such as showing me where in NT there is a “contains BSD code” style passage, as demanded by the BSD license), I’ll chalk this up as another piece of deluded ranting from a FreeBSD fanboi.
Also, your link to the SA Mag article proves how little you even know about FreeBSD and how little research you did. SA Mag later acknowledged that they didn’t know what the hell they were doing with FreeBSD. They hadn’t even enabled soft updates on the file system. Even a neophyte FreeBSD user reading this article could tell that Sys Admin didn’t do their homework and didn’t know what the hell they were doing with FreeBSD. Sys Admin got hundreds of emails and redid this benchmark after people had told them how to properly tune FreeBSD.
Well done. Guess what? FreeBSD still got spanked in the redone benchmarks, despite being manually tuned far more than the other OS’s.
Nice try. 2.5 is called a developer release. 2.5 is not intended for production use, and 2.5 NEVER will be in production use.
I know 2.5 is the development kernel. Isn’t FreeBSD 5 still in development? 2.5 will eventually become 2.6/3.0 (and lots of stuff gets backported to 2.4 as well). So why the, uh, interesting double standard?
As far as it being merged into 2.4, I have yet to see proof that it is stable. Give me evidence that it is stable.
Did you read my post? Apparently not. RedHat 7.3 and RedHat Advanced Server ship with the O(1) scheduler backported to 2.4. If you know how stringent RedHat’s QA process is, you’ll understand how this is a glowing endorsement of its stability.
I could be wrong, but I am pretty sure that the SMP rewrite is mostly complete.
Like most of your other posts in this thread, you are wrong.
http://www.freebsd.org/smp/#status
As a former Linux user, and a present FreeBSD one, I haven’t seen yet what makes the BSD utilities the anemic counterparts of the GNU ones.
One day, I’d love to take a bunch of you people, sit you down in front of a bunch of PCs with different OSes, but all running browser, email and IRC clients with a GUI that looks *exactly* the same across platforms, and challenge you to spot the difference between the TCP/IP stack speeds.
Not a single person alive would be able to tell the difference in normal use, ie. it’s sorta irrelevant.
However, in Mac OS X, Mach is linked with other kernel components into a single kernel address space. This is primarily for performance; it is much faster to make a direct call between linked components than it is to send messages or do RPC between separate tasks. This modular structure results in a more robust and extensible system than a large, purely monolithic kernel would allow, without the performance penalty of a pure microkernel.
This was on Apple’s website. It’s bullshit. Separating everything into separate tasks doesn’t make the kernel more “robust” or “extensible.” All that gains is modularity, which can be had from Linux loadable modules or FreeBSD KLDs. Nobody writes “monolithic” kernels anymore. These days, everything is a small core, and other stuff is loaded as modules. What’s important is the source organization. If all the source code is independent and modular, it doesn’t matter if everything is linked into one binary at the end. As for performance, it’s worse than a pure monolithic kernel, because even though everything is in the same address space, message passing still must happen between processes. (Think about it: if you want to send the filesystem process a command to write some file, you have to send a message. Calling into its code would keep everything in the context of the calling thread, which would make it exactly the same as a monolithic kernel.)
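To make the distinction concrete, here is a loose shell analogy (not OS X internals): a direct call stays in the caller’s context, like invoking a shell function, while message passing hands the request to a separate process, like writing into a pipe and having another process act on it:

```shell
# "direct call" style: the work happens in this process, like a kernel
# component calling another component linked into the same address space
write_file() { printf '%s\n' "$2" > "$1"; }
write_file /tmp/direct.txt hello

# "message passing" style: the request crosses a pipe to a separate
# process, which is where the extra overhead comes from
printf '%s\n' world | ( cat > /tmp/message.txt )

cat /tmp/direct.txt /tmp/message.txt
```

Both produce the same result; the second just pays for a process boundary on the way.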
“Uh, no they aren’t. The GNU utilities are completely new implementations which are, more often than not, significantly better than their functionally anemic BSD counterparts.”
Yes, they are. Do your homework about the history of Linux. Why do you think Linux uses the BSD print utilities instead of the Sys V ones? Because they were ported from BSD. Most of the common utilities are ports of BSD utilities, although some of them added enhancements.
“FYI, Microsoft went for a ground up implementation in NT/2K/XP. Unless you can prove otherwise (such as showing me where in NT there is a “contains BSD code” style passage, as demanded by the BSD license), I’ll chalk this up as another piece of deluded ranting from a FreeBSD fanboi.”
The BSD licence only requires that the original copyright be maintained in the source code. So basically, products can use BSD code and no one who doesn’t have the source code will ever know.
There was an article that ran a while back (I will let you know as soon as I find it) about hackers who had proven that Windows 2000 was using the Berkeley network stack by analyzing the signatures it returns when probed with packet sniffers. Various network requests return the “Regents of the University of California” signature. Microsoft acknowledged after this article that yes, Windows 2000 is using parts of the Berkeley networking stack. In fact, there was speculation that one reason Microsoft is supporting FreeBSD with .NET and C# is that Microsoft is using BSD code in Windows. Like I said, I will find the article for you and let you know where it is, but I’m surprised you don’t know about it for someone who claims to follow this industry.
“Did you read my post? Apparently not. RedHat 7.3 and RedHat Advanced Server ship with the O(1) scheduler backported to 2.4. If you know how stringent RedHat’s QA process is, you’ll understand how this is a glowing endorsement of its stability.”
Yeah… Red Hat Linux is a real model of glowing stability and QA. That’s why both Red Hat 7.1 and 7.2 shipped with a broken version of GCC, right? The fact that Red Hat is using it doesn’t prove it is stable. After all, the GCC project specifically recommended against the use of the version that Red Hat shipped with 7.1 and 7.2. That didn’t stop them from doing it anyway.
“Well done. Guess what? FreeBSD still got spanked in the redone benchmarks, despite being manually tuned far more than the other OS’s.”
You must be reading a different study than I am. Because FreeBSD didn’t get spanked from what I can tell.
“Like most of your other posts in this thread, you are wrong.
http://www.freebsd.org/smp/#status“
If you followed FreeBSD at all you would know that a lot of this page is highly outdated. For example, the announcement that the SMP code in 5.0 will be destabilized is over a year old.
“As a former Linux user, and a present FreeBSD one, I haven’t seen yet what makes the BSD utilities the anemic counterparts of the GNU ones.”
That’s because there aren’t very many cases where this is true. Linux zealots like to seriously exaggerate this claim. And it is based on some very minor things that almost no one uses anyway, and which can almost always be worked around with pipes to other commands if you do need to do it.
For example, GNU find supports an option that prevents it from traversing a file system boundary. BSD find doesn’t have that option. But how many times do you use an option like that anyway?
Anonymous: Put your money where your mouth is. I bet you can’t think of many examples of “anemic BSD utilities” other than the one I just gave you, which is pretty much a non-issue anyway.
And like I said, for the most part, there is nothing that the GNU utilities can do that can’t be done with a pipe in BSD utilities. But of course, the typical Linux user is a UNIX admin wannabe who wouldn’t be able to figure out how to build a pipe to do it since they don’t even know what most of the commands in /usr/bin do, or probably even that half of them exist. For example, I bet the average Linux user doesn’t have a clue what the command “tr” does. And I bet even fewer could use awk to do something as basic as print a text file to stdout.
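For anyone following along, the two commands in question are real and tiny; a minimal demonstration:

```shell
# tr translates characters read from stdin; here, lower- to upper-case
upper=$(echo hello | tr '[:lower:]' '[:upper:]')
echo "$upper"                       # HELLO

# awk with no pattern and a bare print action echoes every input line,
# which is the "print a text file to stdout" task mentioned above
printf 'one\ntwo\n' | awk '{ print }'
```

Both are in POSIX and behave the same under the GNU and BSD userlands.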
By the way… As far as I am concerned, that option to GNU find is called “bloat”. It’s a feature that almost no one ever uses, and all it does is increase the size of the binary and increase its startup time. That might not seem like a huge issue for interactive use, but when you have a shell script that calls find multiple times for example, it can add up.
And so far, I’ve never once needed to prevent find from traversing a file system boundary. But if I did need to, I could use a pipe from find to stop it from crossing a boundary.
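For what it’s worth, the GNU option under discussion is `-xdev` (a.k.a. `-mount`), and the pipe workaround amounts to filtering find’s output after the fact. A sketch, using `/proc` as a stand-in for the mount point to exclude:

```shell
# -xdev tells GNU find not to descend into directories on other filesystems
same_fs=$(find /tmp -xdev -maxdepth 0)
echo "$same_fs"                          # /tmp

# the pipe-based approximation: let find cross everything, then filter out
# paths under a known mount point (/proc here; a real script would read
# the mount table instead of hard-coding it)
find / -maxdepth 1 | grep -v '^/proc$'
```

The pipe version still wastes time descending into the foreign filesystem before throwing the results away, which is why the built-in option exists.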
Bah! That’s a lot of technical crap to explain why Linux is better – BUT IT’S NOT.
If Linux is SOOO much better, then why doesn’t the free-as-in-Stallman crowd have a Linux distro that’s as easy to use as OS X? Why does Aqua, even though it is MUCH younger than X Windows/XFree86, make X Windows look like sh!t??? But giving the opinion on kernel performance the benefit of the doubt… I would rather run a MODERN, high-performance, COMPLETE, easy to use GUI (Aqua) on a SLIGHTLY slower kernel than use a hideous, INCOMPLETE, half-baked, inconsistent GUI (X Windows with ANY AVAILABLE WM) on a SLIGHTLY faster kernel that has archaic driver issues.
Got it, Linux losers… dump X Windows and then you can come play.
Most of the common utilities are ports of BSD utilities, although some of them added enhancements.
“port” implies taking the original code and modifying it for another platform. The GNU utilities are ground-up rewrites.
And it looks like you need to do your homework about the origins of the GNU project.
The BSD licence only requires that the original copyright be maintained in the source code. So basically, products can use BSD code and no one who doesn’t have the source code will ever know.
That wasn’t true until about a year back. See Clause 2:
“2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.”
There was an article that ran a while back (I will let you know as soon as I find it) about hackers who had proven that Windows 2000 was using the Berkeley network stack by analyzing the signatures it returns when probed with packet sniffers.
There was an old version of nmap which did that, which I believe is what you’re referring to. It was actually a bug.
Microsoft acknowledged after this article that yes, Windows 2000 is using parts of the Berkeley networking stack.
I’d love to see the press release for that, given that they’d be in breach of the BSD license (remember, this was before the advertising clause change).
…
Oh, wait, you aren’t talking about the API are you?
@_@
In fact, there was speculation that one reason Microsoft is supporting FreeBSD with .NET and C# is that Microsoft is using BSD code in Windows.
Huh? That is a complete non sequitur. There are two reasons why Microsoft has released a piss-poor CLI implementation for FreeBSD:
– to appease the DOJ
– because they hate the GPL and think (rightly) that FreeBSD is the smallest threat to Windows.
Red Hat Linux is a real model of glowing stability and QA.
Do you know the QA process RedHat put their kernels through? Evidently not. I suggest that you do some research first, before uttering these outright lies.
That’s why both Red Hat 7.1 and 7.2 shipped with a broken version of GCC, right?
There are a couple of reasons why RedHat branched and stabilised a GCC 3.0 development snapshot (shipped as GCC 2.96):
– IA64 and S/390 support was required, GCC2.95 was unable to provide this, so the only option was the GCC3.0 branch
– GCC2.95 has pretty poor C++ support
etc.
for more, see http://www.bero.org/gcc296.html
You must be reading a different study than I am. Because FreeBSD didn’t get spanked from what I can tell.
http://www.samag.com/documents/s=1147/sam0108q/
(that’s the follow-up with all the tweaks applied to FreeBSD and the other OS’s left as they were)
At 1500 simultaneous sends, FreeBSD’s performance goes into a nosedive. And this is supposed to be scalable?
Uh huh.
If you followed FreeBSD at all you would know that a lot of this page is highly outdated. For example, the announcement that the SMP code in 5.0 will be destabilized is over a year old.
The pertinent bits – the status reports – appear to be regularly updated. Unless I’ve magically jumped forward in time, and 23rd May 2002 was actually six months ago.
“- to appease the DOJ”
I don’t think this played a role in it… But…
“- because they hate the GPL and think (rightly) that FreeBSD is the smallest threat to Windows”
At least we agree on something. Microsoft’s main reason for .NET in FreeBSD is to splinter the open source community. At this point, Linux is the bigger threat to them. Will .NET change that and make FreeBSD become the dominant server platform over Linux? Probably not. But on the off chance that it does, then we will probably see Microsoft drop .NET for FreeBSD if FreeBSD becomes a serious threat to Win XP in the server arena.
From what I can tell right now though, .NET is not going to revolutionize the way companies do business on the Internet. 78% of IT execs polled in a recent survey said they plan to use J2EE in their internet business framework. Only 22% said they plan to use .NET. I believe this was the infamous survey where the company that ran it caught systems with IP addresses registered to microsoft.com engaged in poll tampering by automatically voting for .NET and such (they threw out a large number of duplicate votes that came from the same systems within the Microsoft domain.)
“There are a couple of reasons why RedHat branched and stabilised GCC 3.0”
Yeah. But the problem is they broke the x86 version of Red Hat for the most part. And x86 is their major customer base. The version of GCC that shipped with 7.1 and 7.2 couldn’t even compile a new kernel out of the box. Red Hat received hundreds of complaints about this from their customers. And what did they do? They completely ignored them! And shipped the same broken compiler in 7.2! Like I said, the GCC project itself recommended against using this compiler.
“At 1500 simultaneous sends, FreeBSD’s performance goes into a nosedive. And this is supposed to be scalable?”
As someone else pointed out to you, this had to do with buffers and not with performance problems in FreeBSD.
“”port” implies taking the original code and modifying it for another platform. The GNU utilities are ground-up rewrites.”
And are in fact often poor imitations and with some of the worst documentation in the *nix world.
“Microsoft acknowledged after this article that yes, Windows 2000 is using parts of the Berkeley networking stack.”
Not only was there an article about it, Microsoft does in fact still acknowledge it, as per the terms of the FreeBSD licence. It is also common knowledge throughout the FreeBSD community as anything more than a cursory browse of the main site would tell you. It’s no good stamping your new little penguin flippers and saying it ain’t just because you don’t want it to be.
“….because they hate the GPL and think (rightly) that FreeBSD is the smallest threat to Windows”
Hating the GPL isn’t confined to the proprietary world you know. And anyway, FreeBSD doesn’t set itself up as a threat to Windows. Incidentally, you clearly don’t remember all the mud-slinging from the Linux “community” when it was confirmed by Redmond that they were indeed using the BSD networking stack. “Traitors” was just one of the unpleasant epithets being bandied around as I recall.
“Do you know the QA process RedHat put their kernels through?”
Take a serious look at the /etc directory of the typical RH machine and then tell me it’s a model of QA. Amazingly they’ve managed to create something as bloated and disorganised as the Windows directory.
“At 1500 simultaneous sends, FreeBSD’s performance goes into a nosedive. And this is supposed to be scalable?”
Ever seen a linux server dive because of the way it swap, swap, swap, swap swaps?
“The pertinent bits – the status reports – appear to be regularly updated. Unless I’ve magically jumped forward in time, and 23rd May 2002 was actually six months ago.”
Perhaps if you subscribe to some mailing lists…..
From what I can tell right now though, .NET is not going to revolutionize the way companies do business on the Internet. 78% of IT execs polled in a recent survey said they plan to use J2EE in their internet business framework. Only 22% said they plan to use .NET. I believe this was the infamous survey where the company that ran it caught systems with IP addresses registered to microsoft.com engaged in poll tampering by automatically voting for .NET and such (they threw out a large number of duplicate votes that came from the same systems within the Microsoft domain.)
We agree on that as well then. I (we) use J2EE quite heavily and I see no good reason to switch to MS operating systems with .NET. Having said that, .NET is exactly what was needed to make sure Sun didn’t drop the ball. I suppose if you’re a Windows programmer, .NET is much nicer than the soggy decaying pile known as Win32 and MFC.
Yeah. But the problem is they broke the x86 version of Red Hat for the most part.
They didn’t, really. There were a few bits of software which didn’t work – but this was largely due to the reliance on buggy behaviour in GCC2.95.
the version of GCC that shipped with 7.1 and 7.2 couldn’t even compile a new kernel out of the box.
It could. The version of GCC included in 7.0 was less stable but the bugs were rapidly discovered and patched through RedHat update. The situation certainly wasn’t ideal, but the alternatives were hardly any better.
As someone else pointed out to you, this had to do with buffers and not with performance problems in FreeBSD.
Sorry, but it looks like a performance problem to me. If the OS is unable to autotune buffer allocations properly, something is seriously wrong.
And are in fact often poor imitations and with some of the worst documentation in the *nix world.
Poor imitations? O_O
Crack is bad for you, mmmkay?
Not only was there an article about it, Microsoft does in fact still acknowledge it, as per the terms of the FreeBSD licence. It is also common knowledge throughout the FreeBSD community as anything more than a cursory browse of the main site would tell you. It’s no good stamping your new little penguin flippers and saying it ain’t just because you don’t want it to be.
Links? Proof? Evidence? If it’s common knowledge then there must be something substantial somewhere. I’d be happy with Microsoft’s acknowledgement. Would you care to provide it?
Hating the GPL isn’t confined to the proprietary world you know.
Captain Obvious strikes again.
Incidentally, you clearly don’t remember all the mud-slinging from the Linux “community” when it was confirmed by Redmond that they were indeed using the BSD networking stack.
You’re quite right. I don’t remember it. Would you care to provide some links?
FreeBSD doesn’t set itself up as a threat to Windows.
That smacks of a striking lack of ambition. If that is the case, with Linux rapidly eating the proprietary Unix market, and Microsoft attempting to do the same, where exactly do you see the BSD’s market?
Take a serious look at the /etc directory of the typical RH machine and then tell me it’s a model of QA.
Take a look at the kernel QA tests. The week-long Cerberus runs. The multi-thousand-line regression tests. The corner-case testing. The organisation or otherwise of the /etc directory has no relevance to this whatsoever.
Learn, think, then post.
Ever seen a linux server dive because of the way it swap, swap, swap, swap swaps?
No, not since the early days of 2.4. We’re a long way from that now.
Perhaps if you subscribe to some mailing lists…..
So, you’re trying to tell me that this page, which was updated just over a week ago is somehow hideously out of date?
“Sorry, but it looks like a performance problem to me. If the OS is unable to autotune buffer allocations properly, something is seriously wrong.”
Whether autotuning buffer allocations is a good thing or not is probably a religious debate. On one hand, it can increase performance. On the other hand, it is a performance hit because autotuning requires monitoring overhead.
It’s probably on the same level as whether Java’s array bounds checking is a good thing or not, or whether garbage collection is a good thing or not. On one hand, it greatly reduces the occurrence of a major source of programming bugs. On the other hand, array bounds checking at runtime, and keeping track of object references so that it can be determined when they can be destroyed, is a performance hit.
Automatic performance tuning is the same kind of trade-off. The code that monitors and tunes the buffers, for example, does have some performance overhead.
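A toy version of that trade-off (all numbers invented): the tuner below re-checks the budget on every pass, and that re-checking is precisely the monitoring overhead being debated:

```shell
# shrink the per-connection buffer whenever the total would exceed the
# memory budget; the loop condition is the monitoring cost
budget=$((256 * 1024 * 1024))    # assume 256 MB set aside for buffers
buf=$((64 * 1024))               # start at 64 KB per connection
conns=8000                       # current simultaneous connections
while [ $((conns * buf)) -gt "$budget" ]; do
    buf=$((buf / 2))             # each adjustment trades throughput for headroom
done
echo "$buf"                      # 32768: halved once to fit the budget
```

A real stack does this continuously and under load, which is where the overhead adds up.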
X is just fine for the purpose. I think somebody already told you about Enlightenment (even E16…) or GTK themes or other WM themes/styles.
Take a look at sunshineonabag.co.uk
or kde-look.org
What isn’t appealing?
And what isn’t fast? I can compile stuff like Mozilla and still use the GUI w/out problems (and play XMMS and chat on ICQ and… if it’s just a Linux kernel compile I can even watch a DivX in MPlayer…). Do that on another system and tell me.
(linux-2.4.19 patched, XFree86-dri (radeon) 4.2.0, Athlon 850, 768 MB RAM)
Aqua looks mildly good to some and great to others, but for sure it isn’t that fast… and under heavy CPU load it may crawl…
Micro Kernels have a cleaner design, are easier to write and maintain due to their modular construction.
No, microkernels aren’t the bearer of modularity. That’s not a trait that distinguishes microkernels. Look at the monolithic Linux kernel that loads most of its code as modules today. You don’t need microkernels to do it.
Well, people sometimes can’t notice dashes, can they?
Then they have no business driving. Next!
After all, most of the GNU utilities are ports of the BSD utilities.
Wrong! The GNU utilities are significant improvements over the old Berkeley utilities, just like Edison’s electric light was a significant improvement over the electric lights of the day.
Problem is x86 hardware doesn’t scale well, so people that need serious SMP performance aren’t using x86 hardware anyway.
Wrong! It’s SMP that doesn’t scale well. Large-scale systems use stuff like NUMA. And there are IA-32 NUMA systems. You do know that SMP is not the end-all and be-all of multiprocessing, don’t you? Nah, I betcha don’t…
This is called common industry knowledge. If I have to prove this to you, then it just proves you know very little about the networking industry.
Heh heh. “I know the answer, I’m just not telling you!” Didn’t work in 4th grade either.
As a former Linux user, and a present FreeBSD one, I haven’t seen yet what makes the BSD utilities the anemic counterparts of the GNU ones.
Try using them — then you’ll find out.
In another thread, he is trying to tell me that no one is using Java.
Ooh…you built a man…and you made him of straw! Ooh! All I’m doing is asking you to produce something concrete to support your claim. And you’re unable to do so. You don’t need to misquote me to prove that you’re a liar. I already knew…
Today must be opposite day!
Every day is “opposite day” for most of the people here!
One day, I’d love to take a bunch of you people, sit you down in front of a bunch of PCs with different OSes, but all running browser, email and IRC clients with a GUI that looks *exactly* the same across platforms, and challenge you to spot the difference between the TCP/IP stack speeds.
Give that bunny a cigar!
Separating everything into separate tasks doesn’t make the kernel more “robust” or “extensible.” All that gains is modularity, which can be had from Linux loadable modules or FreeBSD KLDs. Nobody writes “monolithic” kernels anymore. These days, everything is a small core, and other stuff is loaded as modules. What’s important is the source organization.
Bingo! Another cigar!
I’ve come full circle. It’s nice to see some posts from informed, thinking people for a change. I take full credit for paving their way! All for now…
“Ooh…you built a man…and you made him of straw! Ooh! All I’m doing is asking you to produce something concrete to support your claim. And you’re unable to do so. You don’t need to misquote me to prove that you’re a liar. I already knew… ”
And what exactly do you consider concrete? To me, a survey from Evans Data Corp and a list of 49 major companies including many Fortune 500 companies is pretty concrete evidence that Java is widely used.
Now let’s reverse your little game. Give me evidence that it is NOT widely used. What, are those 49 companies lying? They aren’t really using Java? Give me a break.
It should be noted that most of the GNU utilities were written many, many years before 386BSD was released, so trying to claim that they’re ports of the BSD utilities is more than a little ridiculous.
“It should be noted that most of the GNU utilities were written many, many years before 386BSD was released, so trying to claim that they’re ports of the BSD utilites is more than a little ridiculous.”
386BSD doesn’t have anything to do with it. And Berkeley released the source for BSD as soon as they started working on it. Yes, the GNU utilities are mostly ports of the BSD ones. The source for the BSD utilities was available in the ’70s.
I have a question about the RISC vs. CISC subthread. This is a matter of curiosity, not religion.
My understanding from my comp. arch. course back in ’92 (?maybe) was that until the advent of RISC computer makers tried to improve performance by adding more complicated instructions to a chip, increasing clockspeed, &c.
RISC took the opposite approach: it reduced the instruction set (initially anyway, unlike the G4 say where Apple boasted there were over 100 new instructions) to make room for other on-chip architecture to speed performance, things like:
– a huge number of registers
– on-chip cache
– pipelining
– multiple pipelines
& probably other stuff. (It’s been 10 years, gimme a break.)
Now, maybe CISC has “won” the performance thing. (I’m not conceding that; I’m assuming it for the sake of argument.) When I look at today’s CISC, I see:
– a huge number of registers (compared to 10 yrs ago)
– on-chip cache
– pipelining
– multiple pipelines
& probably other stuff. (I don’t follow it much.)
Am I wrong in concluding from this that the battle was “won” by CISC not because of RISC deficiencies, but because advances in technology simply made it possible for CISC to incorporate RISC tech on a smaller die?
Sorry for the length & confusion…
I’ll just respond to FreeBSD 5’s SMP outdoing Linux 2.4’s SMP code.
You’re saying brand new FreeBSD 5 code outdoes older 2.4 code?
Wow! That’s quite insightful there.
What’s interesting is that I believe linux 2.2 has better smp than FreeBSD 4.*. 2.4 certainly does.
So think about all the work that has gone and will go into FreeBSD 5’s SMP. FreeBSD 5, being the version after 4, is built off what was 4, right?
Now here we have Linux 2.5, which is being built off what’s in 2.4. So maybe it makes sense to think that putting a lot of work into 2.5’s SMP, which already starts from the much superior SMP of 2.4, may yield more impressive results than the work going into FreeBSD 5’s SMP, which is being built off 4’s inferior work?
I love FreeBSD, it’s a great OS, but I’m sick of the zealots. People complain about Linux zealots, but I’ve found that FreeBSD zealots, in general, are MUCH worse than Linux zealots. I also love Linux, but I can’t stand Linux zealots either. FreeBSD zealots are almost as bad as the Mac zealots.
SMP on x86 hardware isn’t that great? This may no longer be true. SMP with Athlon MPs utilizes technology from the Alpha: old Intel SMP made each CPU share the same bus, while with Athlon SMP (and thus Alpha SMP), each CPU gets its own bus. I believe it works this way for those new Intel Xeons as well. I’m sure there are more issues than just buses, but this is certainly a big issue with SMP.
I’m well aware of the fact that there are no quad Athlon boards (yet?). But there are or will be quad boards for the new Xeons.
And if FreeBSD doesn’t scale up as well as linux on the hardware platform with crappy smp, then how well does freebsd scale up on a platform that has good smp??
Alright, now that I’m going, I’ll touch on a few more things:
1) I’m pretty sure 2k and XP use parts of FreeBSD’s networking stack. Not 100% sure but pretty sure. Maybe it’s just an API thing like you mentioned?
2) OS X will become fast because it will sync with FreeBSD X.Y? Well, FreeBSD isn’t on Mach 3.0, so it doesn’t have that problem (or advantage, whatever your opinion is). OS X is already based on an earlier version of FreeBSD; I’m sure if that version of FreeBSD ran on PPC, you’d see it was faster, whether because of OS X being on Mach 3.0 and/or for other reasons.
3) Before I take a side in the mono vs. micro debate, I’ll wait to see how Hurd turns out (or some other free microkernel, if Hurd never turns out), and see how that compares with other free monolithic kernels. Or even better, wait for the PPC FreeBSD port and see how it compares to OS X (but then you must take OS X’s sugar-coated UI into account). Though at this point it seems mixing ideas from the two models works out well.
4) This is just nitpicking I guess, but one advantage I see the GNU utils have over the utils in other Unices is where you can place your options on the command line. The GNU stuff allows for a lot of flexibility here. For example, with GNU’s ls, you can do “ls -l /usr” AND “ls /usr -l”, but with BSD ls (and Sys V too, I think) you can only get away with “ls -l /usr”; try it the other way and see what happens. Yes, I know this alone doesn’t make the GNU stuff better, but it makes life easier, and as a matter of opinion, I prefer it this way.
5) I agree with anonymous 95-99%.
Simba,
Aha! Forgot about the BSDi part; sloppiness on my part.
But they’re still working up from FreeBSD 4.
“I have a question about the RISC vs. CISC subthread. This is a matter of curiosity, not religion.”
Oh. This is finally sounding positive.
“My understanding from my comp. arch. course back in ’92 (?maybe) was that until the advent of RISC computer makers tried to improve performance by adding more complicated instructions to a chip, increasing clockspeed, &c.”
Finally, someone who took a course! This is correct. They actually put in opcodes for everything they saw fit, and those were translated into the actual machine code at runtime. This is done by what is called a “microprogram layer” on CISC processors. RISC processors don’t have such a layer.
“RISC took the opposite approach: it reduced the instruction set”
Not exactly correct. They stripped out the redundant instruction-set functionality found on CISC processors. More instructions were added to make up for the stripped ones, but designers made sure those would complete in 1 clock cycle.
“(initially anyway, unlike the G4 say where Apple boasted there were over 100 new instructions)”
This is mostly for the vector processing unit, and it’s still RISC, as those instructions finish in 1 clock tick.
“to make room for other on-chip architecture to speed performance, things like:
– a huge number of registers
– on-chip cache”
– pipelining
– multiple pipelines
& probably other stuff. (It’s been 10 years, gimme a break.)”
So far, so good. Impressive, in fact.
“Now, maybe CISC has “won” the performance thing. (I’m not conceding that; I’m assuming it for the sake of argument.)”
CISC has not. Hybrid CISC/RISC has not. The x86 decode unit is still a bottleneck and will be for a long time. It may have “won” the performance thing in personal computing (which comprises AMD and Intel on the x86 side, and Motorola and IBM on the PPC side; I’m not counting Crusoe because it is clearly not meant for performance) for the time being, because the early G3s had design flaws and Motorola engineers are having a hard time with G4 MHz issues. Their manufacturing process was seriously broken, and Apple had to go to IBM for G3 parts. IBM doesn’t make “G4”s because of licensing costs regarding the vector engine technology, which was developed by Motorola.
“When I look at today’s CISC, I see:
– a huge number of registers (compared to 10 yrs ago)
– on-chip cache
– pipelining
– multiple pipelines
& probably other stuff. (I don’t follow it much.)”
Which means that they’ve turned their core into RISC. They didn’t have much choice.
“Am I wrong in concluding from this that the battle was “won” by CISC not because of RISC deficiencies, but because advances in technology simply made it possible for CISC to incorporate RISC tech on a smaller die?”
Not entirely wrong. Their core *is* RISC, and it sports longer pipelines for MHz’s sake, and better buses for memory transfer’s sake. AMD would be in serious trouble if MHz were the only way out…
Because of more registers (go check how many registers a P4 has, and how many registers an Itanium is supposed to have), longer pipelines, and other design tradeoffs, x86 cores are *very* big and run *very* hot. PPCs are still *way* cooler, smaller, and better engineered in general. Try unplugging your Athlon fan and see what happens. It *melts*. The P4 disables itself after some time, so it won’t melt, but it will fail. What about G4s? There are usually no fans, only a simple aluminium heatsink.
“Sorry for the length & confusion…”
No problem! We need messages from polite and educated people once in a while… =)
And sorry for some broken English on my previous messages.
Are you sure your reasoning makes any sense? You know, about SMP in Linux starting from the good base of 2.4 and in FreeBSD from the bad base of 4.x, hence SMP in Linux will always be better (that’s not what you wrote, but that’s what it means)? I suppose that if you start from nothing, you won’t get anything.
Why do these debates always turn into the same flamefests as CISC vs. RISC ones do? I mean, the author even linked to a flamefest in the article (a very famous flamefest, but a flamefest all the same) between Linus Torvalds and Andy Tanenbaum.
Apple had good reason to choose a Mach microkernel; it has several benefits that do not work as well on a monolithic kernel. Monolithic kernels like Linux have benefits over microkernels. One of those is networking performance.
“Now here we have linux 2.5, which is being built off of what it’s in 2.4. So maybe it makes sense to think that putting a lot of work into 2.5’s smp, which already has the much superior smp of 2.4 may possibly yield more impressive results over the work going into freebsd 5’s smp, which is being built off 4’s inferior work?”
This is incorrect. FreeBSD 4.x SMP was broken because it relied on a “giant lock” structure for access to critical areas. FreeBSD 5 introduces modern, fine-grained access to those.
“I love FreeBSD, it’s a great OS, but I’m sick of the zealouts. People complain about Linux zealouts but I’ve found that FreeBSD zealouts, general, are MUCH worse than linux zealouts, I also love linux but I can’t stand linux zealouts either. FreeBSD zealouts are almost as bad as the mac zealouts.”
There are zealots for every conceivable subject on Earth. Mac zealots sometimes overreact because people don’t share their delight for the easy-to-use interface, logical approach, etc. I find Mac OS 9 visuals quite appealing, IMHO, and haven’t used Mac OS X enough to say my final word about it. However, Darwin’s frameworks model is *impressive*; it’s way, *way* more logical and cleaner than the current “stuff everything under /usr” approach. Linux zealots think wide support is more important than stability, and they don’t mind having their kernel patched once every week… I didn’t, back then. However, current kernels are showing great improvement (thank God!). *BSD zealots have an extreme focus on stability, and generally lack low-end hardware support. But this is changing fast (again, thank God!).
” 2) OS X will become fast because it will sync with FreeBSD X.Y? Well, FreeBSD isn’t on Mach 3.0 so it doesn’t have that problem (or advantage, whatever your opinion is). OS X is already based on an earlier version of FreeBSD, I’m sure if that version of FreeBSD ran on PPC, you’d see it was faster, whether it was for OS X being on Mach 3.0 and/or for other reasons.”
OS X *user land* will sync with FreeBSD. Not the kernel. Not that I’m aware of. Improvements will be in the stability area and POSIX compliance, mostly. And, again, OS X is *not* on top of a pure Mach kernel. It’s a modular monolithic kernel, like BeOS was. It allows other code to run in kernel space, much like the BeOS kernel add-ons model.
“Simba,
Aha! Forgot about the BSDi part, sloppyness on my part.
But they’re still working up from FreeBSD 4.”
Hey, it’s not that they cut and paste BSDi code into the FreeBSD kernel! Granted, they modelled the new SMP on BSDi’s ideas and reused parts of BSDi code, and had (and are still having) to change a *lot* of dependencies on that giant lock thing. They actually had to restructure the whole kernel! This is why it’s taking so long. And this major rewrite of the kernel will lead to enormously better scalability, and I believe x86 dependencies are being stripped out too.
>CISC has not. Hybrid CISC/RISC has not. The x86 decode >unit is still a bottleneck and will be for a long time.
>It may have “won” the performance thing on personal >computing
How did it win on personal computing?
As you on yourself to point out x86 are now RISC cores…
CISC was comprehensively slaughtered by RISC. How many CISC chips are in production today?
On the desktop Zero.
For workstations/Servers Zero.
RISC didn’t need fancy tricks to speed up either. They have become very complex out-of-order machines today, but DEC was kicking everyone else’s ass with the minimalist in-order Alpha 21064 and 21164; the last of the CISC chips (Pentium 1, K5, Cyrix) never even came close.
>I’m not counting Crusoe because it is clearly not
>meant for performance) for the time being,
It was originally, but when it didn’t live up to expectations they sold it to the tried and tested “low power” market. However, it’s neither CISC nor RISC; like Itanium it’s VLIW (notably, Intel screwed up on their first VLIW attempt also).
meianoite,
“This is incorrect. FreeBSD 4.x SMP was broken because it relied on a “giant lock” structure for access to critical areas. FreeBSD 5 introduces modern, fine-grained access to those.”
Ok, so they’re still building up from 4 with some BSDi stuff added, right?
” There are zealots for every conceivable subject on Earth. Mac zealots sometimes overreact because people don’t share their delight for the easy-to-use interface, logical approach, etc. I find Mac OS 9 visuals quite appealing IMHO, and haven’t used Mac OS X enough to say my final word about it. However, Darwin frameworks model is *impressive*, it’s way, *way* more logical and cleaner than current “stuff everything under /usr” approach. Linux zealots think wide support is more important than stability, and they don’t mind having their kernel patched once every week… I didn’t, back then. However current kernels are showing great improvement (thank God!). *BSD zealots have extreme focus on stability, and generally lack low-end hardware support. But this is changing fast (again, thank God!).”
Well, that doesn’t change the fact that zealots are irritating. But what I mean by zealot is someone who puts up a religious fight at any chance; merely being concerned with stability or whatever just makes you concerned about stability or whatever, not necessarily a zealot.
Maybe the term I should be using is
<Insert-platform-here> troll.
” OS X *user land* will sync with FreeBSD. Not the kernel. Not that I’m aware of. Improvements will be in the stability area and POSIX compliancy(sp?), mostly. And, again, OS X is *not* on top of a pure Mach kernel. It’s a modular monolithic kernel, like BeOS was. It allows other code to run on kernelspace, much like the BeOS kernel add-ons model. ”
Uh huh but at the top it says:
“As for MacOSX lagging behind Linux, we should not be forgetting that Apple announced that MacOSX will sync with FreeBSD 4.x for MacOSX 10.2 while it will also sync with the next generation FreeBSD, 5.x, next year.”
Maybe I phrased my response/question wrong.
My point was, how does this prevent OS X from “lagging behind linux” exactly? Not a troll either, real question.
This is how it’s been working already, right? OS X is already (sorta) synced up with an FBSD release. Did I misunderstand what Eugenia meant by lagging behind Linux?
” Hey, it’s not that they cut and paste BSDi code into FreeBSD kernel! Granted, they modelled the new SMP against BSDi’s ideas and reused parts of BSDi code, and had (and are still having) to change a *lot* of dependencies on that giant lock thing. They actually had to restruccture the whole kernel! This is why it’s taking so long. And this major rewrite to the kernel will lead enormously better scalability, and I believe x86 dependencies are being stripped off too.”
Yeah, that was kinda my point: having to restructure like that seems like it would take a lot of work. Linux appears to have a relatively solid foundation to build on for a lot of SMP stuff. Also, working with BSDi code does obviously matter; that’s why I brought up forgetting about BSDi. If it wasn’t advantageous, they wouldn’t be using BSDi code.
Alright now I will reply to:
” Are you sure your reasoning makes any sense? You know, about SMP in Linux starting on the good base of 2.4 and in FreeBSD on the bad base of 4.x, hence SMP in Linux will always be better (that’s not what you wrote, but that’s what it means)? I suppose that if you start from nothing, you won’t get anything.”
Manik,
It makes PERFECT sense; it’s your response that makes absolutely none. Maybe you skimmed over my post a bit too quickly? Look over it again: I said “may”, not WILL, nor “there’s no way around Linux 2.6/3.0 having legendary SMP”, but that it _may_ be better than FreeBSD 5’s. By this, I mean that it’s not unrealistic to think that FreeBSD 5’s won’t be quite as good as Linux 2.6/3.0’s. But since I used the word “may”, it also means there’s a possibility that FreeBSD 5 will have the better SMP.
It’s unfair to either OS to say one will definitely have the upper hand in SMP, and the work in this area isn’t finished yet for either OS. It will be quite interesting to see what happens; regardless of which OS has better SMP, chances are it won’t be too much of a difference, and regardless of how it turns out, this will be good for everyone in one way or another.
And while I’m off topic, I’ll ask something on topic (but which is off the previous topics in my post): how is OS X’s SMP? Microkernels are supposed to be made with SMP in mind, though there’s debate as to whether OS X uses a true microkernel. Personally, I’m on the “I don’t really care” side, since OS X is more a reflection of itself than of microkernels (or whatever) as a whole.
WOW, after reading all the posts (DAMN), I have come to some conclusions:
50% of the people don’t know much more than what an icon is or a command line (I am in this category for the most part)
25% only know 1 platform inside and out, so they think it is the BEST
15% know more than 1 platform and just like one and stick by its side, hedging their bets, so to speak
10% look at things objectively and look at all sides of the question at hand
Anyone else seem to notice this trend? And one last question: have we solved the Linux/OS X debate? I have; they both are about the same (from an ignorant person, ME).
I use multiple OSes and want to learn Linux; I want to change my old G3 into a Linux box. I have to use my PC for games :-). OS X is on my iBook; damn, it is nice to have an easy, stable, nice-looking OS that networks well.
RISC = 1 instruction per clock? I’ve never heard this definition before… probably because it’s an extremely dumb definition. The 286 was subscalar, < 1 instruction per clock on average; the 386 was scalar, 1 instruction per clock; the 486 was superscalar, > 1 instruction per clock.
So if I just ran a 486 with only the instructions that take < 1 clock, would it then be a RISC processor?
To be honest, CISC won. Ever since the old days when a 386 was fast, people have been telling me that RISC was faster. It’s never really been true. Most code is in fact not vectorisable, especially since only now can compiler technology autovectorise. Simple things can be vectorised, but not complex algorithms. Since PCs run the most complex algorithms known to man, it’s not surprising that RISC lost. CISC won because it has generally worked out faster to have a specific complex instruction that takes 3 cycles instead of parallelisable code that takes 4 using separate instructions. As we can put more things on a chip, we have more space for more instructions, and each instruction can be more complex.
Over time we have seen this small performance penalty for CISC get smaller; however, the usefulness of RISC processors has become less and less. It’s like saying a computer system that is meant to run with 64 instructions is faster than one that is meant to run with 128 instructions because it’s got more free silicon. There’s an element of truth to it, but also an element of stupidity. A program meant for the 64 instructions will indeed run on a CPU with 128 instructions, and the 64-instruction CPU could be optimised to be faster.
BUT THE POINT IS those other 64 instructions are USED when the program is compiled for the 128-instruction machine, and each of these is a HARDWARE INSTRUCTION that will of course be able to run faster than a software implementation.
As programs get more complex, so does the CPU: more instructions.
RISC is NOT all the technologies that are used in today’s CISC/RISC hybrids.
RISC stands for…
Reduced Instruction Set CPU.
It doesn’t stand for microcode or for independent execution units. It’s just that RISC CPUs had them first.
RISC means reduced instructions, that’s all.
If I said to you I could make a C compiler fast so long as I pulled out sin and tan etc., and you would have to code them yourself, and it would be slower than if the compiler did it… would you use it?
Real world performance!
Glenn
Note: Doesn’t it seem funny that the latest, fastest CPUs from Intel and AMD have MORE instructions, yet they are supposedly more RISC? What a crock. It’s not more RISC; it uses advanced CPU technologies that first appeared with RISC.
For the frikken last time, RISC means Reduced Instructions, and this is NOT what is or has been happening.
CISC RISC??..
I thought that a RISC processor utilizes preset instructions that you can call on when programming; basically it is like compressing things. So it does not matter how many new instructions you have; if you have reduced the instructions (by calling on preset instructions), a CPU is a RISC? I know nothing, help me see where I have gone wrong here.
Can someone define this in stupid terms for me 🙂
Okay folks, here’s a dose of reality on the “MS uses the ‘BSD’ IP stack because it’s the best” front. I’ll say again that the conclusion isn’t supported under any circumstances. Even if MS does use Berkeley IP, that’s no proof that it’s any good. Microsoft isn’t a purveyor of quality — remember MS-BOB?
Here is what I found on an old Windows 95 laptop:
Copyright (c) 1983 The Regents of the University of California.
That copyright clearly shows that Microsoft uses Berkeley code…
…in FTP.EXE and only FTP.EXE! Make no mistake: this is no proof, it’s only a clue. Maybe there’s other Berkeley code, maybe not. Maybe it’s only the IP apps, and not the stack itself. But something from Berkeley is in Windows.
Now if I were to use Mac-head logic, this is proof positive that Windows 95 is UNIX. /* me rolls eyes */ Well, it does have Cygwin…
Well done. FreeBSD still had its arse handed to it (despite being tuned as much as possible), largely because, as I said earlier, its scalability is truly dire.
It wasn’t tuned properly; they had no clue what they were doing. They just took ALL of the different advice people sent in: good, bad, and contradictory options. Also, it’s only a benchmark of his own spam mailer, not of generic network applications.
I know people who actually compared FreeBSD/Linux on PRODUCTION servers running their actual applications, and they say FreeBSD handles twice the load, and this is from people who prefer Linux on their desktops.
“I know people who actually compared FreeBSD/Linux on PRODUCTION servers running their actual applications and they say FreeBSD handles twice the load, and this is from people who prefers Linux on their desktops.”
I know people who actually compared FreeBSD/Linux on PRODUCTION servers running their actual applications and they say Linux handles sixteen times the load, and this is from people who prefers (sic) FreeBSD on their desktops.
I mean, what is the point of this worthless anecdote?
If FreeBSD is so magically scalable, despite all the technical reasons why this isn’t the case, why are the enterprise software suppliers flocking to Linux? Why are Oracle switching their internal backend systems to Linux? Why have Amazon migrated from HPUX to Linux? Why not BSD?
Could it be that you don’t know what you’re talking about?
>RISC = 1 instruction per clock? ive never heard this definition before..
That’s not a definition as such, but the early RISC designs did do everything in one cycle.
>To be honest CISC won .. ever since the old days
>when a 386 was fast ppl
>have been telling me that RISC was faster. Its
>never really been true.
Some of (if not the) earliest RISCs were the Berkeley RISC I & II, which, implemented on a gate array, could outperform the fastest CISC of its day, the Motorola 68000. Sun, then using 68000s in their workstations, took the project and turned it into the SPARC.
>Most code is infact not vectorisable..
>esp since only now compiler technology can autovectorise. Simple things
>can be vectorised but not complex alg.
You are confusing RISC with vector processors, two different things.
>BUT THE POINT IS those other 64 istructions are
>USED when the program is compiled for the 128
>isntruction machine.. and each of these is a
>HARDWARE INSTRUCTION that will of course be able to
>run faster than a software implementation.
This is true, but only to a very small degree. One of the ideas behind RISC is the 80-20 rule: 20% of the instructions did 80% of the work. Also, the full instruction set was never put into hardware on CISC; they used microcode (read: slow) for some stuff.
One point missing from this discussion is addressing modes. These are a very CISC feature and x86 has them by the truck load, trouble is compilers didn’t use the more complex ones that much if at all.
Most of what a processor is doing is very simple stuff: moving, adding, comparing, etc. The RISC idea was to rip out the 80% that was hardly used and implement a CPU around these simple operations. They took out the microcode, and there were very few (or even just one) addressing modes. The result was smaller, faster and cheaper to make.
Intel took notice and made themselves a RISC called the i860, which came out just before the 486 and destroyed it performance-wise. There was even a point where some PCs would include an additional socket for the i860.
However, the market, with 10 years of PC software behind it, decided to keep using x86.
The Workstation market was different, performance was more important so they switched: PA-RISC, MIPS, Alpha, POWER, SPARC are all RISC CPUS.
Even some desktop machines switched: Acorn with the ARM, and Apple to PowerPC. Had they survived, it’s likely Amiga and Atari would have done the same.
>RISC is NOT all the technolgoies that are used in
>todays CISC RISC. Risc stands for….
>Reduced instruction set cpu.
Correct, but CISC took pretty much all of them except for exotic manufacturing techniques and huge die sizes… including the RISC idea itself. The cores of the Athlon, Pentium 4 and even VIA’s C3 are all RISC; they do not use the x86 instruction set. Look at the pipelines and there is a stage which converts x86 instructions into “micro-ops” or “macro-ops” (not sure of the exact marketing terms). x86 couldn’t keep up with the RISC chips, so they became RISC themselves.
Today’s x86s are really just RISC processors with hardware translation/emulation of the x86 instruction set.
>Note Dosent it seem funny that the latest fastest
>cpus from intel and
>AMD have MORE instructions..
These are not additions to the basic instruction set; these are additions for the SSE and 3DNow! units, which are for vector processing. The i860 had one back in 1990.
>For the frikken last time RISC means Reduced
>Instructions and this is NOT
>what is or has been happening.
But that’s exactly what happened to x86 a long time ago.
However times move on, CPU designers find new methods of speeding up CPUs, and yes that includes adding additional hardware and instructions to handle it.
But they have never gone back to the massive complex CISC instruction sets with umpteen addressing modes.
NexGen, Pentium Pro and K6 were all RISCs.
CISC (Complex Instruction Set Computing)
means CPUs with many instructions.
RISC (Reduced Instruction Set Computing)
means reducing the number of instructions you use to program a CPU, a bit like a language which uses fewer words to do the same thing, but it’s not actually compressed.
“I know people who actually compared FreeBSD/Linux on PRODUCTION servers running their actual applications and they say Linux handles sixteen times the load, and this is from people who prefers (sic) FreeBSD on their desktops.”
Ah, these are the same people who claim stuff like “my grandma can handle Linux 10x better than you can handle WindowsNT”. Congrats, you suck.
We replaced our Linux 2.4 web and file servers with FreeBSD 4.4 servers, and our system is overall faster and responds quicker… UNTUNED! And when FreeBSD 5.0 is gold, we’ll pop a 2nd CPU in all the mobos and upgrade.
” The core of Athlon, Pentium 4 and even VIAs C3 are all RISC, they do not use the x86 instruction set. Look at the pipelines and there is a stage which converts x86 instructions in “micro-ops” or “macro-ops” (not sure of the exact marketing terms). x86 couldn’d keep up with the RISC chips so became RISC themselves. ”
See here i think this is confusing RISC and what technology RISC had first.
Risc means reduced instruction set.. it used to be also because of memmory size.. more bytes for each instruction made the x86 supposedly slow ( i actually found my 386sx seemed to cain common Sun boxes of the time even though ppl tried to convince me otherwise.. I didnt believe the hype then from my direct expereinces). The next gen curosoe and the Itanium use very long instructions. Thats because the mem bandwidth isnt as much a problem as are cache misses. The longer instructions for the Itanium help the cpu organise its pipelines. It dosent matter at ALL how the Itanium dose it its using EXTRA instructions it dosent have to .. definatly not REDUCING them.
Risc was different to Cisc in that u could reduce the number of possible instructions to make it faster. The athlon HAS many instructions it dosent matter if it uses an internal core that is simlar to RISC because it still executes CISC instructions. It seems to many ppl call RISC technology, “RISC”. To execute instructions using macro ops isnt the same as RISC. Its basically how u can use CISC fast. It is based on RISC technology but it of course isnt actually RISC because it DOSENT have a reduced instruction set.
The winners atm would have to be AMD and Intel over Sun. So this proves that having a few extra instructions doesn’t hinder total performance significantly. CISC won.
anyone notice how Linux zealots can’t stand to hear that a part of Linux is inferior to something else… get over it, Linux zealots… FreeBSD kicks your ass in some places… hell, Microsoft kicks your ass in some places… get over it… no OS is perfect for everything… not even my fav FreeBSD… and I, unlike a Linux zealot, am willing to admit that my OS is not perfect… but I’ll be called a troll because I bashed Linux… well, I just bashed all those OSes equally, I think
“How did it win on personal computing?
As you yourself point out, x86 are now RISC cores…”
Did you notice the quotation mark? ^_^
My point is that people are running RISC machines at home, RISC cores with some obscene design complexity stuffed in (the x86 decode unit and the giant P4 pipeline) and are bragging about the “proven CISC performance”… “CISC has won in the long run”…
MMX, 3DNow!, SSE, SSE2, all of those were to make up for the inadequate IA-32 instruction set for “modern” uses (3D, some DSP work, vector processing, things that had been around for some time and boomed into the home market thanks to the 3Dfx Voodoo 1 ^_^). The first-generation MMX took over the floating point registers to do its math… This is *clearly* a hack. More hacks have been done, more compromises have been made, and now you can use your Athlon CPU to fry eggs. What’s next, bacon?
“Maybe I phrased my response/question wrong.
My point was, how does this prevent OS X from “lagging behind linux” exactly? Not a troll either, real question.
This is how it’s been working already right? OS X is already (sorta) sync’ed up with an FBSD release. Did I misunderstand what Eugenia meant by lagging behind linux?”
It’s because Eugenia linked one of the most irresponsible pieces of technical “literature” *ever* on OSNews and counted it as true. She even put that outrageous “our take” in there. This is called a “syllogism”, when you use broken logic to prove a broken point. Mac OS X based its userland on FreeBSD’s, took most of its command-line tools, and that’s it. OS X’s BSD libraries are instead based on NetBSD’s. Encryption libraries/tools were taken from OpenBSD. The kernel is *not* stock Mach; it has been modified to include other code into kernel land for performance’s (and other technical issues’) sake, and, even then, Mach itself is *very* speedy when its native IPC routines are used instead of ports, which were primarily made to support the Berkeley UNIX emulation layer.
This link was posted in the LinuxJournal thread where that horrid, flawed, pathetic article appeared:
http://developer.apple.com/techpubs/macosx/Darwin/General/KernelPro…
Read the third paragraph; it’s the same approach taken by BeOS: a modular monolithic kernel.
Also read
http://developer.apple.com/techpubs/macosx/Darwin/General/KernelPro…
which describes kernel extensions (kernel add-ons, in BeAPI terms)
Except for its age, there’s no other reason for Mac OS X to “lag behind” Linux. Try Aqua-less Darwin (and don’t put X on top of it! Stress its *kernel*, that’s the idea) instead of Mac OS X, compare it to Linux, and see the results for yourself. Try Jaguar on AGP 2x Macs. Try next-generation OS X when it’s ready.
Linux is now 10 years old and has had much more worldwide collaboration than NEXTSTEP/Mac OS X. Give the new guys a break, they’re doing an amazing job. They really are. Even though I’m aware the average Joe is more comfortable with “classic” Mac OS interface, I think it’s worth the change.
And, compared to Solaris on SPARC, Linux is lagging behind… Oh well, time for another flamefest?
“See here i think this is confusing RISC and what technology RISC had first.”
This is my final word on this subject. I’m tired.
RISC means stripping redundancy and unnecessary complexity, *not* crippling your instruction set. Not at all. The first noticeable side effect is that in first-generation RISC the instruction set was much smaller than the corresponding CISC ISAs, but this is not what defines RISC.
Go check UltraSPARC and Cray specs sometime. They are easily found.
“I know people who actually compared FreeBSD/Linux on PRODUCTION servers running their actual applications and they say Linux handles sixteen times the load, and this is from people who prefers (sic) FreeBSD on thier desktops.”
Whoever told you this is full of BS. Yes, it really is that simple. Sixteen times? LOL. Obviously one of those people who doesn’t have a clue about how to tune FreeBSD properly.
The main performance bottleneck for FreeBSD out of the box is the filesystem. It doesn’t have soft updates enabled by default, and it uses synchronous file system writes. This is slow, but very safe.
ext2fs, on the other hand, uses asynchronous file system writes. This is fast, but very dangerous, especially in transaction-heavy environments. Of course, the newer journaling file systems solve this problem.
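To get a feel for the synchronous-vs-asynchronous tradeoff described above, here is a minimal illustrative Python sketch (not FreeBSD or ext2fs code; the 4 KiB block size and block count are arbitrary). Forcing every write to hit the disk with fsync() models the safe synchronous style, while plain buffered writes model the fast asynchronous style that can lose data on a crash:

```python
import os
import tempfile

BLOCK = b"x" * 4096  # one arbitrary 4 KiB block

def write_blocks(path, n_blocks, sync_each):
    """Write n_blocks blocks to path.

    If sync_each is True, flush and fsync after every write, roughly
    mimicking synchronous (safe-but-slow) filesystem behaviour.
    Otherwise the writes stay buffered, mimicking asynchronous
    (fast-but-risky) behaviour.
    """
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(BLOCK)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())  # don't return until data reaches the disk
    return os.path.getsize(path)

tmpdir = tempfile.mkdtemp()
sync_size = write_blocks(os.path.join(tmpdir, "sync.dat"), 64, sync_each=True)
async_size = write_blocks(os.path.join(tmpdir, "async.dat"), 64, sync_each=False)
```

Both paths end up with identical file contents; the difference is only in when the data is guaranteed to be on disk, which is exactly why the synchronous approach survives a crash mid-transaction and the asynchronous one may not.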
“Why are Oracle switching their internal backend systems to Linux? Why have Amazon migrated from HPUX to Linux? Why not BSD?”
Oracle is not switching their internal backend to Linux. If you think they are, I would love to see your source for this. The Linux kernel sucks for running Oracle. In fact, it sucks so bad that Oracle had to modify the kernel to get anything like halfway respectable performance. Have you ever actually worked with Oracle on Linux? It’s not much fun. The installer sucks, etc. Oracle for Linux is nowhere near on the same level as Oracle for Solaris.
Also, I know some people running Oracle for Linux on FreeBSD. And I have heard some say that it actually performs better on FreeBSD than it does on Linux itself.
BTW, Amazon probably migrated because they had to cut costs. And I could be wrong, but I think Amazon ran Digital UNIX, not HPUX.
BTW, Amazon’s average uptime since they switched to Linux is only 84 days. So much for Linux stability…
Oracle, by the way, is running on Solaris. Always has been. If Oracle were actually migrating to Linux, they would migrate their web front end long before they migrated their DB backends.
I think the RISC/CISC fighters should read this again:
http://www.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
Simple, clear, and to the point, starting from the first line:
“The majority of today’s processors can’t rightfully be called completely RISC or completely CISC.”
In fact, most of these arguments can be settled by reading the different Blackpapers in this section:
http://arstechnica.com/cpu/index.html
The FreeBSD vs Linux vs OSx fight should just stop, because:
1) There is no point in a fight over which is really better when no real data is used to back up the claims.
2) Any software has strengths and weaknesses, and the day there is one perfect all-around piece of software, it won’t have been designed by humans, but by some kind of neural-network AI.
3) What the hell are you guys doing? We all have a common enemy, and they are using these divisions in our ranks to prevent us from beating them.
Don’t you think that if we stop bickering over stupid sh*t and focus on beating “the Beast” we could actually get somewhere?
You know, like when the humans, the hobbits, the dwarves, the hawks, the elves, the wolves, etc. banded together against the Orcs.
shout outs:
Meianoite you the man.
Simba, you are definitely doing better than in the last discussion about PPC/x86 (about Xserve),
although do use more data to back up your claims, because your arguments always fall down on this.
This so far has been the most intelligent discussion I’ve read on this site, apart from a few clueless gnomes.
Did you guys notice?
Speed is either on Coke or is Manic.
but mainly, mainly, he doesn’t know what he is talking about, and keeps using sad inductive reasoning for all of his arguments.
“Did you guys notice.
Speed is either on Coke or is Manic.”
Seabass,
Speed is pretty much just trolling. Like I said in another thread, he is trying to tell me that I didn’t give him any evidence that Java is being widely used (he’s trying to claim no one is using Java). His claim that I have no evidence that Java is being widely used comes after I presented him with a survey from Evans Data Corp that says over 50% of developers in the US are using Java, another survey that said 78% of IT execs polled said J2EE does or will play an important role in their network application strategy, and a list of 49 major companies using Java in major ways–many of these companies being Fortune 500 companies.