“Apple’s quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware.[…] BeOS implements its TCP stack in a user process, microkernel-style, and so does QNX. Both have notoriously slow TCP stacks.” Read the article at LinuxJournal. Our Take: Oh, yeah, this is why Be rewrote the whole networking stack with many of its parts living in kernel space and named the project “BONE”. As for Mac OS X lagging behind Linux, we should not forget that Apple announced that Mac OS X will sync with FreeBSD 4.x for Mac OS X 10.2, and that it will also sync with the next-generation FreeBSD, 5.x, next year. Technically speaking, FreeBSD 5.x is one of the most advanced operating systems one can find today (or tomorrow :).
“Did you guys notice? Speed is either on Coke or is Manic.”
maybe he’s on speed…
Sorry, couldn’t resist that 🙂
This is the last time I post here, as I don’t want to get drawn into this anymore.
bytes256,
“anyone notice how linux zealots can’t stand to hear that a part of linux is inferior to something else…get over it linux zealots…FreeBSD kicks your ass in some places…hell Microsoft kicks your ass in some places…get over it…no OS is perfect for everything…not even my fav FreeBSD…and i unlike a linux zealot am willing to admit that my OS is not perfect…but i’ll be called troll because i bashed linux…well i just bashed all those OSes equally i think”
Uhm, you mean how the FreeBSD, Microsoft, and Mac zealots don’t do the exact same thing, and how the Mac and FreeBSD zealots haven’t posted the same kind of stuff for this story? Oh wait…
meianoite,
I think you misunderstood me a bit, but that’s fine because you answered my question anyway. Thanks.
>Risc means reduced instruction set.. it used to be
>also because of memmory size.. more bytes for
>each instruction made the x86 supposedly slow
It’s the other way round: RISC programs are typically 30% bigger.
VLIW (Itanium) programs are 30%+ bigger again.
CISC does have some advantages, one being shorter instructions.
However, some ARM processors support a “Thumb” instruction set, which is a 16-bit version of the ARM’s normal 32-bit instructions.
>The next gen curosoe and the Itanium use very
>long instructions.
The current generation uses long instructions; that’s what VLIW is. They pack 3 or 4 instructions into a 128-bit bundle. There’s none of the complex addressing modes of the x86 though; VLIW is designed to be much simpler even than RISC.
>The winners atm would have to be AMD and Intel
>over Sun. So this proves that having a few extra
>instructions dosent hinder total performance
>significantly. CISC won.
If CISC is so blazingly fast, why did the workstation vendors not switch to x86 or 68K?
Not all RISCs were ever equal; Sun was never fast. Although if you tested floating point you’d find a big difference, as even Sun beat x86 there.
On the other hand, Alpha usually did a good job of toasting everyone, and then when they’d all finally caught up there’d be a new Alpha which toasted them again. The Alpha was always the simplest of the RISCs; they kept it that way so they could keep cranking the clock speed right up. While the 486 was still at 50 MHz or below, the Alpha was tearing away at 200 MHz.
Expensive though…
I have an old 21164 I got for $10 with a motherboard. Originally that CPU alone was $3,000, and people think Macs are expensive…
—
>RISC means stripping redundancy and unnecessary
>complexity, and *not* crippling your instruction set.
>Not at all. The first noticeable side-effect is that in
>first-generation RISC the instruction set was much smaller
>than corresponding CISC ISA, but this is not what defines RISC.
Thank you, that pretty much sums it up, yes.
—
“The majority of today’s processors can’t rightfully be called completely RISC or completely CISC.”
In terms of the complexity of the hardware that is very true; however, it is not so when applied to the instruction set.
Instructions have generally been added to allow for vector-type operations, just as x86 has with 3DNow! and SSE. Typically the RISC vendors did it in one go rather than having MMX, SSE, SSE2, etc.
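For what it’s worth, the idea is easy to see in C with Intel’s SSE intrinsics (the intrinsic names below are real ones from <xmmintrin.h>; the two wrapper functions are just my own illustration). One vector instruction does the work of four scalar ones:

    #include <xmmintrin.h>   /* SSE intrinsics: _mm_loadu_ps, _mm_add_ps, _mm_storeu_ps */

    /* Scalar version: one float add per loop iteration. */
    void add_scalar(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    /* SSE version: four float adds per instruction (assumes n is a multiple of 4). */
    void add_sse(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
        }
    }

MMX, 3DNow!, SSE2 and AltiVec are all variations on the same theme, just with different widths and data types.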
“The main performance bottleneck for FreeBSD out of the box is the filesystem. It doesn’t have soft updates enabled by default and it uses sychronous file system writes. This is slow, but very safe.”
Incorrect. The default mount option is noasync, which has metadata written synchronously while data I/O is done asynchronously. However, you are correct about soft updates: they aren’t enabled by default. You are also correct about FreeBSD being highly conservative in its settings “out of the box”.
Read your man pages, people.
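(For anyone wondering what the sync/async distinction actually means in practice, here is a minimal C sketch; the filenames are made up, and this shows the per-file O_SYNC flag rather than the mount option itself, but the trade-off is the same: synchronous writes don’t return until the data is on disk, which is slow but safe.)

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Asynchronous: write() returns once the data is in the buffer cache;
         * the kernel flushes it to disk later. Fast, but lost if the box crashes. */
        int fast = open("async.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        /* Synchronous: with O_SYNC every write() blocks until the data is
         * actually on the platter. Slow, but very safe. */
        int safe = open("sync.dat", O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);

        const char buf[] = "hello";
        write(fast, buf, sizeof buf);   /* returns almost immediately */
        write(safe, buf, sizeof buf);   /* returns only after the disk has it */

        close(fast);
        close(safe);
        return 0;
    }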
Finally, a Eugenia I can agree with ;-).
Network performance? Ain’t seen nothing yet. Just wait until FreeBSD and OS X adopt NetBSD’s “zero-copy” patch (http://mail-index.netbsd.org/current-users/2002/05/02/0016.html).
Whoever told you this is full of BS. Yes, it really is that simple. Sixteen times? LOL. Obviously, one of those people who doesn’t have a clue about how to tune FreeBSD properly.
Congratulations, you missed the sarcasm. I was pointing out how utterly meaningless this kind of anecdotal “evidence” is; anyone can claim anything.
Oracle is not switching their internal backend to Linux. If you think they are, I would love to see your source for this. The Linux kernel sucks for running Oracle. In fact, it sucks so bad that Oracle had to modify the kernel to get anything like halfway respectable performance. Have you ever actually worked with Oracle on Linux? It’s not much fun. The installer sucks, etc. Oracle for Linux is nowhere near on the same level as Oracle for Solaris.
Yes, I work with Oracle on Linux frequently. It is true that Oracle engineers collaborated with kernel hackers – to extract the best possible performance from the system. This included, amongst other things, tuning the elevator algorithms and improving the rawio interfaces.
As for your shrill “evidence for the Oracle migration!” cries:
http://www.computerworld.com/hardwaretopics/hardware/server/story/0…
http://www.infoworld.com/articles/hn/xml/02/01/31/020131hnlarrye.xm…
http://www.itworld.com/Comp/2384/020131ellisonlinux/
http://www.nwfusion.com/news/2002/0131ellison.html
From Larry Ellison himself.
Oh dear. Seems you’re utterly, irrevocably wrong. Again.
I’m still waiting for some links to Microsoft’s BSD TCP stack acknowledgement.
And I have heard some say that it actually performs better on FreeBSD than it does on Linux itself.
And I’ve heard some say that Elvis is still alive. Nobody cares about your anecdotes.
BTW, Amazon probably migrated because they had to cut costs. And I could be wrong, but I think Amazon ran Digital UNIX, not HPUX.
You miss the point (again) – why did they migrate to Linux and not FreeBSD?
Network performance? Ain’t seen nothing yet. Just wait until FreeBSD and OS X adopts NetBSD’s “zero-copy” patch
*yawn* Something else Linux has had for ages (since 2.4.3 or so).
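(For the unfamiliar: “zero-copy” here means the kernel hands file pages straight to the network card instead of bouncing the data through a userspace buffer with read() and write(). A minimal Linux-flavoured sketch using sendfile(2) follows; the function name is mine, the socket is assumed to be already connected, and note that FreeBSD’s own sendfile(2) takes different arguments.)

    #include <sys/sendfile.h>   /* Linux sendfile(2) */
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Send an open file over an already-connected socket without copying the
     * data through userspace. Returns 0 on success, -1 on error. */
    int send_whole_file(int sock_fd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(sock_fd, fd, &offset, st.st_size - offset);
            if (sent <= 0) {
                close(fd);
                return -1;
            }
        }

        close(fd);
        return 0;
    }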
“Congratulations, you missed the sarcasm. I was pointing out how utterly meaningless this kind of anecdotal “evidence” is; anyone can claim anything. ”
The problem is that Linux zealots often do make highly exaggerated claims about performance like this, and they are NOT being sarcastic. I missed the sarcasm because I have too much experience with Linux zealots.
Ok. I was wrong about the Oracle thing. But at the same time, it looks suspiciously like politics played a big role in this decision. As did economics.
As far as why Amazon chose Linux instead of FreeBSD: because Linux is better supported. No one denies that. There are more applications available for Linux. They didn’t necessarily choose it because it was a better server platform. If everyone always chose platforms based on what was the best technically, we wouldn’t have any Windows 2000/XP internet servers.
Linux is more attractive to businesses because of companies like Red Hat. Basically, if I set up a Red Hat server and it breaks, I have someone’s neck to wring on the other end of a phone line when I call up Red Hat. With FreeBSD, I don’t get that.
But at the same time, it looks suspiciously like politics played a big role in this decision. As did economics.
This is the backbone of Oracle’s business; politics and, to a certain extent, economics don’t come into it. Linux is a highly capable Oracle DB platform, to such an extent that Oracle are willing to bet the functioning of their business on it. So your drivel about Linux is, again, completely misplaced.
“This is the backbone of Oracles business; politics and to a certain extent economics don’t come into it. Linux is a highly capable Oracle DB platform, to such an extent that Oracle are willing to bet the functioning of their business on it. So your drivel about Linux is, again, completely misplaced.”
Then how come so few Fortune 500 companies are running Oracle on Linux? Linux still doesn’t scale well. That is why we will still have Solaris and AIX for some time into the future.
Google is a good example of how poorly x86 and Linux scale. There are over 9,000 boxes running Google. Can you say maintenance nightmare? Sure you can.
Intel clusters may eventually replace big systems. But it isn’t going to happen until Intel’s 64-bit servers are in wide-scale use. The current x86 processors just don’t scale well enough to handle serious loads.
Then how come so few fortune 500 companies are running Oracle on Linux?
Because Linux hasn’t been suitable for this for that long (a year or so) – it takes time for a new solution to be tested and accepted. You’re going to have to admit that Oracle switching to Linux for their backend systems is a pretty strong endorsement.
Linux still doesn’t scale well.
Arguable, but it certainly scales better than FreeBSD, which is what this argument started with.
Google is a good example of how poorly x86 and Linux scale. There are over 9,000 boxes running Google. Can you say maintenance nightmare? Sure you can.
Do you have any idea about the technology and requirements behind Google’s infrastructure? I’m guessing a big, fat no, because if you did you’d realise why a massive cluster of cheap commodity machines is exactly the right solution for them. Google is absolutely not an example of what you’re claiming, and it’s ridiculous to pretend it is.
Also, if you knew anything about Google’s infrastructure, you wouldn’t mention “maintenance nightmare” with a straight face *snicker*
FWIW, with the O(1) scheduler (which is shipping and stable now, remember) Linux has been shown to scale fairly linearly to beyond 32 processors (on a Power4 box).
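(For those who haven’t looked at it: the trick that makes the scheduler “O(1)” is a run queue per priority level plus a bitmap, so picking the next task is a fixed-cost bit scan rather than a walk over every runnable process. The C below is just my own sketch of the idea, with made-up names; it is not the kernel’s actual code.)

    #include <strings.h>   /* ffs(): index of lowest set bit */

    #define NPRIO  140                  /* priority levels, as in the 2.5 O(1) scheduler */
    #define NWORDS ((NPRIO + 31) / 32)

    struct task;                        /* opaque here; only pointers are stored */

    struct runqueue {
        unsigned int bitmap[NWORDS];    /* bit set = that priority has runnable tasks */
        struct task *queue[NPRIO];      /* head of the FIFO list for each priority */
    };

    /* Cost is bounded by NPRIO, not by the number of runnable tasks:
     * that is the "O(1)" part. */
    struct task *pick_next(struct runqueue *rq)
    {
        for (unsigned int w = 0; w < NWORDS; w++) {
            int bit = ffs((int)rq->bitmap[w]);   /* 1-based, 0 if no bit set */
            if (bit)
                return rq->queue[w * 32 + bit - 1];
        }
        return 0;   /* run queue empty */
    }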
A cluster of 9,000 boxes is NOT the right solution for anyone. A year or so ago they had 7,000. Do you not see the problem here? As their traffic grows, they keep having to add literally thousands of new boxes. How long can you continue that?
“Also, if you knew anything about Googles infrastructure, you wouldn’t mention “maintenance nightmare” with a straight face *snicker*”
Do you know anything about Google’s infrastructure? Have you ever actually seen the server cage at Google’s HQ?
“FWIW, with the O(1) scheduler (which is shipping and stable now, remember) Linux has been shown to scale fairly linearly to beyond 32 processors (on a Power4 box).”
I’m not saying all of it is Linux’s fault. A lot of it (and maybe even the majority) has to do with x86. It wasn’t designed to scale like that, and it doesn’t do it very well. Intel’s 64-bit processors may change that. But until they do, x86 CANNOT replace SPARC and RS/6000.
Linux will play a big role in the future. Even Sun finally acknowledged that (much to Red Hat’s dismay, probably) when they announced they will be creating their own Linux distribution.
By the way… I’m NOT anti-Linux. The reason Amazon switched to Linux instead of FreeBSD is the same reason I will probably be migrating my servers to Linux and off of FreeBSD, even though I think FreeBSD is technically superior in many ways.
In my case, it has to do with Java. I need JSP and servlets. And I can’t keep waiting while the FreeBSD Java project says “it’s coming” and continues to keep on delaying it longer. As of right now, the most recent native Java port for FreeBSD is 1.1, which is obsolete.
So it will be application support, probably just like in most cases, that causes me to switch to Linux.
“It’s the other way round, RISC programs are typically
30% bigger. VLIW (Itanium) are 30%+ bigger again”
I’m talking about the instruction length, not the number of instructions. One of the stated advantages of RISC at the time was that all instructions were 1 byte, whereas x86 had some 2-byte instructions.. it WAS NOT just the extra decoding time but also the extra space and bandwidth it takes to bring the CODE into the CPU.
Itanium programs are larger because their instructions are LONGER, NOT because there have to be more instructions because they are simpler. Longer instruction lengths are very NON-RISC. But let me guess: all you “RISC won” people will somehow try to spin it like VLIW instructions are somehow part of the whole REDUCE-the-NUMBER-of-instructions-and-make-them-SMALLER idea.
One example of how using fewer, more complex instructions can help is the rotate instruction (see distributed.net). The Sun doesn’t have it, so it is left FAR behind; the x86 has it in its normal instructions but not in SSE, and the G4 has it in AltiVec. This instruction is hardly ever used and is certainly one of the ones likely to be REMOVED in a RISC design. And the performance speaks for itself: distributed.net shows RC5 cracking at about 5 million on the x86, 12 million on the G4, and 1 million on the Sun.
Why did this happen? Because rotate is hardly ever used.. Sun decided it wasn’t worth the silicon and didn’t add it, because you can do rotate with other instructions. However, JUST as I pointed out, CPUs run many different tasks and use pretty much all the instructions they have available.. so in certain cases you do want the hardware instruction, and it adds to performance tremendously.
Hardware ROCKS over software..
For every extra instruction you have, you have the potential for an X-times increase in speed on that task.
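(To make the rotate example concrete, here’s a minimal C sketch; rotl32 is just my own name for it. On x86 a compiler can collapse the whole expression into a single ROL instruction, while a CPU with no rotate has to spend two shifts and an OR every time, which is the kind of gap the RC5 numbers above illustrate.)

    #include <stdint.h>

    /* Rotate x left by n bits (0 < n < 32). With a hardware rotate
     * (x86 ROL, AltiVec vrlw) this is one instruction; without one
     * it costs two shifts plus an OR. */
    static uint32_t rotl32(uint32_t x, unsigned int n)
    {
        return (x << n) | (x >> (32 - n));
    }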
“If CISC is so blazingly fast why did the workstation
vendors not switch to x86 or 68K? ”
People still use Sun now.. I rest my case.
Alpha had a RISC instruction set.. many of its developers moved to AMD and developed the Athlon (one of the first ever pipelined FPUs, I believe). The Athlon easily beats the Alpha.. yet it’s very similar technology. This shows there was little point in Reducing the instruction set, because it wasn’t the limiting factor; otherwise it would have been impossible for an Athlon to beat an Alpha.
“There’s none of the complex addressing modes of
the x86 though, VLIW is designed to be
much simpler even than RISC”
RISC doesn’t mean no x86 addressing modes.. RISC means to Reduce the number of Instructions. I used to be told over and over again that there were too many x86 instructions for the x86 to ever be the top performer. I was so obviously right, in that reducing the number of instructions was NOT required; it never made that much difference, and it is making less and less.
“RISC means stripping redundancy and unnecessary complexity, and *not* crippling your instruction set.”
Would you say the Sun not having rotate “cripples” it.. I would.
RISC is just hype because people were getting worried x86 could one day beat Sun.
The fastest PCs at the moment still run Complex (and getting more so) Instruction Sets. Note: AMD 3DNow! has a hardware sqrt instruction.
CISC won.
I’m pretty much of the mind that CISC vs. RISC doesn’t really mean anything anymore. Pretty much each camp takes ideas from the other camp to make their own CPU faster: OOO execution, branch prediction, SIMD, etc.
Also, great headway is being made in runtime code optimization.
Didn’t IBM make an emulator for one of their server chips that would make software run 25% faster than native, just because it would do runtime code optimization on the binaries? (This is running a JIT environment on the platform that the JIT environment is emulating.)
It’s like the DAISY or tulip or some flower-name project. I’m not sure that would help out standard x86, but I wonder what a project like that could do for a modern PPC CPU such as the G4?
OK people. I have read through many of the posts (useless flames) and was just wondering: if YOU were to design a system BASED on the properties of other systems, what would your system be?
For Example:
BSD networking
MS Windows Interface
SoAndSo’s security
etc…
Is it based on monolithic, microkernel, hybrid, exo? What architecture does it run on? Anything that makes that system unique? For what kind of users is it intended?
Every user has different needs. From high traffic web servers to my grandma’s Dell.
I really don’t intend to start another flame war (leave that to the trolls) but I would like your responses.
Keep it real.
“In my case, it has to do with Java. I need JSP and servlets. And I can’t keep waiting while the FreeBSD Java project says “it’s coming” and continues to keep on delaying it longer. As of right now, the most recent native Java port for FreeBSD is 1.1, which is obsolete.”
Actually, the Linux Java SDK runs fine on FreeBSD. I’ve used it up to version 1.4 with zero problems so far.
Speed is either on Coke or is Manic.
Winners can speak to my words. Sometimes they prove me wrong, sometimes not. But winning an argument is not the point — the winning comes from participating with honor.
Losers on the other hand can do nothing but call me names.
Here’s to the winners!
OK people. I have read through many of the posts (useles flames) and was just wondering. If YOU were to design a system BASED on the properties of other systems. What would your system be?
Cool game! Here I go…
Kernel: Balance sheer performance with availability of hardware drivers — Linux wins by a nose.
Networking: Cisco
Filesystem: ReiserFS, for many subtle reasons.
Security: MULTICS
User Interface: Vocal & gesture recognition with talk-back (a la Enterprise & Hal), and mental telepathy for graphics (to ease the eyestrain).
Hardware: The fastest that I can afford!
Steganography: All of my passwords and credit card numbers would be hidden in a pile of dog poo out in the back yard. Nobody would think of looking there!
Networking: BSD
Interface: BeOS or Mac
Security: OpenBSD
FileSystem: BeOS (or something similar)
Hardware: Alpha or Power
Minesweeper: Had to get something from Windows…
We can’t have all of this, but by replacing the interface and file system you can actually build an Alpha-based OpenBSD machine.
Fast, reliable, secure – costs a ton of cash though.
—
>Steganography: All of my passwords and credit card
>numbers would be hidden in a pile of dog poo out
>in the back yard. Nobody would think of looking there!
LOL! but you told everyone where you would keep them!
All your passwords are belong to…damn where’s that smell coming from?
“Actually, the linux Java SDK runs fine on freebsd. I’ve used it up to version 1.4 with zero problems so far.”
Performance problems mostly–especially with Swing applications. The Linux Java SDK on FreeBSD is rather slow.
Uhhh… I have a native FreeBSD 1.3 jdk on my FreeBSD system. Installed from /usr/ports/java/jdk1.3
“Uhhh… I have a native FreeBSD 1.3 jdk on my FreeBSD system. Installed from /usr/ports/java/jdk1.3”
You sure it’s native? There is a native version of JDK 1.3 listed in the FreeBSD ports tree. The version of JDK 1.3 that is listed in the ports tree has “linux base-6.1” as a requirement. It’s not native. It’s the Linux version.
Also, if that port actually is finished, someone seriously dropped the ball on updating the FreeBSD web site.
The latest native version I see listed is 1.1.8.
“There is native version of JDK 1.3 listed in the FreeBSD ports tree.”
Oops. That should have said “There is NO native version of JDK 1.3…”
Ah… I see what they did. That is actually a patch set that allows you to build a native version from the Linux version.
But at this point in time, it doesn’t help me. If I am reading the port description correctly, it can only be used for personal use at this time.
Also, I need to use some of the features in 1.4.
“””You sure it’s native? There is native version of JDK 1.3 listed in the FreeBSD ports tree. The version of JDK 1.3 that is listed in the ports tree has “linux base-6.1″ as a requitement. It’s not native. It’s the Linux version.”””
If you read the pkg-descr you’ll see that it is a native version (built from/patched against Sun’s sources); it still requires the Linux version for some reason (probably for checking output). The only feature I’ve noticed it lacking is JIT.
What they are speaking to Sun about is certification to distribute prebuilt binaries and validate it as a Java2-compatible implementation.
LOL! but you told everyone where you would keep them!
I didn’t say which pile! I also have a trigger-happy neighbor who’s retired and can spend all day looking out for an intruder with a stick and some baggies… It’s all built into MULTICS somewhere
All your passwords are belong to…damn where’s that smell coming from?
LOL! Now that’s an audit trail!