“The NetBSD Foundation announces that it has hired Andrew Doran to work full-time on improving symmetrical multi-processing in NetBSD. This work is made possible through a generous donation by Force10 Networks and internal funding by The NetBSD Foundation. Andrew Doran is an independent, Dublin-based Unix systems consultant with a special interest in building scalable systems. He has been a NetBSD developer since 1999 and is currently working on the transition from a big-lock SMP implementation to a fine-grained model, which allows multiple CPUs to execute code in kernel context simultaneously. Hiring Andrew full-time will boost work in this area, with the final result of an SMP implementation that is ready for tomorrow’s multi-core CPUs.”
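For those unfamiliar with the terminology: under a “big lock” only one CPU can be executing in the kernel at any given time, while a fine-grained model gives each subsystem its own lock, so CPUs only wait for each other when they touch the same data. A toy userland sketch of the difference (pthreads; the “subsystem” names are made up and this is not actual NetBSD kernel code):

#include <pthread.h>

/* Big-lock model: a single global lock serializes ALL "kernel" work,
 * so a second CPU entering the kernel waits even for unrelated work. */
static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;

static void net_input_biglock(void)
{
    pthread_mutex_lock(&giant);     /* also blocks filesystem callers */
    /* ... process a packet ... */
    pthread_mutex_unlock(&giant);
}

/* Fine-grained model: each subsystem has its own lock, so one CPU can
 * run network code while another runs filesystem code concurrently. */
static pthread_mutex_t net_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vfs_lock = PTHREAD_MUTEX_INITIALIZER;

static void net_input_finegrained(void)
{
    pthread_mutex_lock(&net_lock);
    /* ... process a packet ... */
    pthread_mutex_unlock(&net_lock);
}

static void vfs_lookup_finegrained(void)
{
    pthread_mutex_lock(&vfs_lock);  /* does not contend with net_lock */
    /* ... do a name lookup ... */
    pthread_mutex_unlock(&vfs_lock);
}

int main(void)
{
    net_input_biglock();
    net_input_finegrained();
    vfs_lookup_finegrained();
    return 0;
}

Under the big lock, a CPU handling a packet keeps another CPU from doing an unrelated directory lookup; under the fine-grained model they proceed in parallel.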
“Tomorrow’s multi-core CPUs”?
They’re already here…have been here for months.
O…k….
Not sure job security was top priority, since even NetBSD founders claim the project is headed for irrelevance and obscurity.
“Tomorrow’s multi-core CPUs”?
They’re already here…have been here for months.
Maybe they meant that the current SMP implementation (which came out in 2004 if I recall) won’t scale well when an average machine has, say, 16 cores, as in “Tomorrow’s multi-core CPUs”.
Or maybe it was just marketing fluff. Who knows.
Years, if you consider SMP servers. Multicore doesn’t change anything in that field like it does in desktops.
By the way, one person can’t make an operating system scale well to multicore systems. It took Linux many years, many developers, and many subsystem maintainers rewriting their subsystems to make it work.
Years, if you consider SMP servers. Multicore doesn’t change anything in that field like it does in desktops.
I disagree, to some degree. On the x86 side in, say, 2003, a 4-core, 4-socket box like a Compaq DL580 was a somewhat exotic beast that set you back $30k-$50k. The last 8-socket x86 box I’ve touched was a Compaq with 700 MHz P3 Xeons (running NT4), and it cost about $70k.
Today, $4k buys an 8-core server and everyone is buying one. This means that SMP scalability is much more of a pressing issue in the commodity server space today than it was a few years back.
Now, way back when, of course you saw some much larger RISC boxes. But none of them were running a commodity OS.
“Years, if you consider SMP servers. Multicore doesn’t change anything in that field like it does in desktops. “
My main personal machine is a dual processor system (relatively low-end P3 system) from 1999, that has had BeOS run happily on it that entire time.
Why did I buy that thing? Why dual-processor back then? Because the reality is that it cost me less to buy the hardware and processors to run dual P3-450s than it did to upgrade to even a single P3-500, and it has served me quite well for responsiveness under BeOS, and even quite well under XP and Win2K3, largely because it had that other processor.
In reality, there’s a sweet spot for manufacturing/running costs versus performance, and depending on what you need, that may be a dual socket system, or it may be more, and that’s been true in the server realm for a long time.
NetBSD is apparently long overdue to get this work done on it, and I applaud that someone is hiring someone full-time to make it happen: it greatly increases the value proposition of using NetBSD on cheaper multicore/multisocket beasts.
It will take time, but a common misconception is that we are starting essentially from scratch as was the case for FreeBSD’s SMPng. SMP is something that the NetBSD developers have been working on for years now. The focus of this effort is to bring that work to fruition.
— Andrew
It will take time, but a common misconception is that we are starting essentially from scratch as was the case for FreeBSD’s SMPng. SMP is something that the NetBSD developers have been working on for years now. The focus of this effort is to bring that work to fruition.
and by ‘bring to fruition’, Andrew means “rip everything out and start over”. Going with a new thread model, a new synchronization model, and a new interrupt service model.
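To make the “new synchronization model” part concrete: NetBSD drivers have traditionally protected shared data by raising the interrupt priority level with the spl(9) calls, while the newer code uses the mutex(9) primitives added in the recent locking rework. A rough, hypothetical sketch of what such a conversion might look like (the driver, its softc and the field names are invented; the exact headers and IPL constant may differ):

#include <sys/param.h>
#include <sys/intr.h>
#include <sys/mutex.h>

struct foo_softc {                      /* hypothetical driver state */
    kmutex_t sc_lock;                   /* protects sc_queue */
    /* ... */
};

/* Old style: raise the interrupt priority level.  This only stops
 * interrupts on the CURRENT CPU, so it is no protection at all once
 * other CPUs can be running kernel code at the same time. */
static void
foo_old(struct foo_softc *sc)
{
    int s = splnet();
    /* ... manipulate sc->sc_queue ... */
    splx(s);
}

/* New style: a real lock that excludes other CPUs as well.  The IPL
 * argument tells mutex(9) how the lock interacts with interrupts. */
static void
foo_new(struct foo_softc *sc)
{
    mutex_enter(&sc->sc_lock);
    /* ... manipulate sc->sc_queue ... */
    mutex_exit(&sc->sc_lock);
}

static void
foo_attach(struct foo_softc *sc)
{
    mutex_init(&sc->sc_lock, MUTEX_DEFAULT, IPL_NET);
}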
netbsd will be around and developed for soem time. there are lots and lots of cmercial companies that use it due to its scalibility. i wouldnt worry about its long term existance. and either way any effort going into and of teh bds’s will end up benefiting all the bsd’s if the others choose to adopt it.
“netbsd will be around and developed for soem time. there are lots and lots of cmercial companies that use it due to its scalibility. i wouldnt worry about its long term existance. and either way any effort going into and of teh bds’s will end up benefiting all the bsd’s if the others choose to adopt it.”
But does it have a spell checker?
haha sorry wireless keyboard was running out of batteries
about 2002-3 some fairly wide ranging tests were done comparing linux, freebsd, openbsd and netbsd. testing included the scalability of memory allocations, socket opening and so on:
http://bulk.fefe.de/scalability/
that study is now out of date, all the systems involved have moved on since then.
i haven’t seen any similar study since then. does anyone know of any? evidence certainly adds more weight to arguments – some of which are clinging to myths or out-of-date facts. including opensolaris in a new study would also be useful.
Yes it’d be nice to see some new benchmarks. I don’t believe anyone has done anything more recent.
I believe SMP is somewhat weak on NetBSD & OpenBSD and OK on FreeBSD 6.2 but I can’t verify this to be true without seeing benchmarks.
FreeBSD 7 will give a *real* boost to SMP performance & turn things around.
A couple of benchmark results provided by FreeBSD:
http://obsecurity.dyndns.org/bind-resperf.png
http://people.freebsd.org/~jeff/sysbench.png
http://people.freebsd.org/~jeff/mysqlwrite.png
http://people.freebsd.org/~jeff/scaling.png
The performance difference in SMP bind from 6.2 to 7 seems VERY IMPRESSIVE.
Another short article talking about SMP on FreeBSD:
http://www.internetnews.com/dev-news/article.php/3561526
I’m hoping someone will provide independent SMP performance benchmarks once FreeBSD 7 is released. And compare it to NetBSD, OpenBSD, DragonFly & Linux.
Though DragonFly is not one of the popular choices I’d like to see how it performs because it is handling SMP differently. ( more out of interest ).
>and OK on FreeBSD 6.2 but I can’t verify this to be true without seeing benchmarks.
The Linux scheduler has some performance peaks, but the FreeBSD 6.x scheduler, 4BSD, offers reliability under heavy load. With the advent of the new CFS scheduler in Linux (2.6.23), Linux says goodbye to chasing peaks and hello to consistent behaviour under load, rather than benchmark numbers to impress the media. So CFS is in fact now roughly on par with 4BSD.
http://jeffr-tech.livejournal.com/10103.html
For those that may get confused or lost by the post in that link, just go back one post to get the link to the graph.
FWIW, the apparent poor Linux performance is due to a bug in the glibc malloc implementation. It should be patched in glibc 2.5, or you can use Google’s malloc library instead.
http://ozlabs.org/~anton/linux/sysbench/
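For anyone who wants to see this kind of allocator contention first-hand, a minimal pthread micro-benchmark along these lines will show it (the file name, thread count and iteration count are arbitrary); run it under time(1) with different thread counts, and with different allocators via LD_PRELOAD, and compare the wall-clock times:

/* mbench.c: N threads hammering malloc/free to expose allocator
 * contention.  Build: cc -O2 -o mbench mbench.c -lpthread
 * Run:   time ./mbench 1     vs.     time ./mbench 8              */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000000

static void *worker(void *arg)
{
    int i;
    (void)arg;
    for (i = 0; i < ITERS; i++) {
        void *p = malloc(64 + (i & 255));   /* small, mixed sizes */
        free(p);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int i, n = (argc > 1) ? atoi(argv[1]) : 4;
    pthread_t *t = malloc(sizeof(*t) * n);

    for (i = 0; i < n; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (i = 0; i < n; i++)
        pthread_join(t[i], NULL);

    free(t);
    puts("done");
    return 0;
}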
Just to let you know, the issue still remains. The new CFS scheduler really helps; it doesn’t reach as high a peak, but it is more reliable under load.
Check this out, even with the “fixed” glibc malloc and even with CFS.
http://people.freebsd.org/~jeff/sysbench.png
DragonFly BSD apparently doesn’t scale so well at the moment.
http://obsecurity.dyndns.org/dfly.png
http://leaf.dragonflybsd.org/mailarchive/users/2007-05/msg00134.htm…
DragonFly would be sure to disappoint at the moment as large chunks of it still require and run under the MP lock, including the threading code.
My best guess ATM is that it’s going to be another year before we start to see the MP lock removed from enough of the kernel to see how well the DF model scales.
I’m optimistic that it should do well once the MP lock is largely gone, but that’s off in the future.
The fefe benchmarks tested algorithmic scalability.
Algorithmic scalability is the easy part. It usually involves code that is localized to one particular subsystem, and the changes made do not have system-wide ramifications. It’s just doing the same old thing more efficiently.
An excellent illustration of this fact is that when the fefe benchmarks were first published, NetBSD didn’t do too well. They rectified that in a matter of weeks.
Parallel scalability is what NetBSD is currently struggling with, and it’s a much harder problem. Just look at how long it has taken FreeBSD to get anywhere on that front, and they have had a lot more people contributing to it than NetBSD.
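To make the distinction concrete, here is a toy sketch (invented names, not code from any of these kernels). The algorithmic fix is hashing into short chains instead of scanning one long list; the parallel fix is giving each bucket its own lock instead of one table-wide lock. The second change is the hard one, because every caller that relied on the single lock has to be re-audited:

/* table.c: toy illustration of the two kinds of scalability. */
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define NBUCKETS 256

struct entry {
    uint32_t      key;
    struct entry *next;
};

/* Algorithmic scalability: hash to a short chain instead of walking
 * one long global list -- a local change, invisible to callers.     */
static struct entry    *buckets[NBUCKETS];

/* Parallel scalability: one lock per bucket instead of a single
 * table-wide lock, so lookups of different keys stop contending.
 * This is the change that ripples outward through the callers.      */
static pthread_mutex_t  bucket_lock[NBUCKETS];

static unsigned hashkey(uint32_t key) { return key % NBUCKETS; }

void table_init(void)
{
    int i;
    for (i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&bucket_lock[i], NULL);
}

/* Lifetime management of returned entries is elided; real kernel code
 * would use reference counting or hold the lock across the use.     */
struct entry *table_lookup(uint32_t key)
{
    unsigned b = hashkey(key);
    struct entry *e;

    pthread_mutex_lock(&bucket_lock[b]);
    for (e = buckets[b]; e != NULL; e = e->next)
        if (e->key == key)
            break;
    pthread_mutex_unlock(&bucket_lock[b]);
    return e;
}

int main(void)
{
    table_init();
    return table_lookup(42) != NULL;    /* trivially exercises the path */
}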
about 2002-3 some fairly wide ranging tests were done comparing linux, freebsd, openbsd and netbsd. testing included the scalability of memory allocations, socket opening and so on:
http://bulk.fefe.de/scalability/
that study is now out of date, all the systems involved have moved on since then.
This same guy did another comparison in 2006, though it was about filesystems. Interesting read, both the PDF and the diary:
http://bulk.fefe.de/lk2006/
A couple of years ago I did a test comparing MySQL on various OS installations (NetBSD, FreeBSD, Linux, Solaris, OpenBSD), and NetBSD did well on a single core but did terribly on multicore. NetBSD multicore threading was actually broken, so only multi-process applications could use more than one CPU; multi-threaded ones were still stuck on one. It’s good to see that they’re addressing this.
It’s good to see that they’re addressing this.
Actually, Andrew already addressed that a few months ago. What is at stake here is the breaking of the kernel big lock, which is a completely different (and unrelated) issue.
While having a big lock is not too bad on a dual-CPU machine, as the number of CPUs in the system increases, contention on the big lock grows, and that degrades the performance of the whole system.
Quentin Garnier.
Pity that. To date Doran’s work on NetBSD has shown an amazing lack of depth.
He is, basically, optimizing PC-SMP at the expense of every other architecture, and he’s doing it in ways that are well understood not to scale.
NetBSD’s claim to fame was its diversity, due to the large number of architectures it supports. Doran will torpedo that while at the same time bringing no more than a ‘me-too’ level of SMP performance to the PC architecture.
NetBSD would have been better off using Matt Dillon’s SMP work and spending their effort on maintaining diversity, concentrating on the active embedded architectures.
NetBSD has taken another step in the direction of obscurity and irrelevance.
Any facts or just fishing for “compliments”?
Plenty of facts:
1) Dillon’s SMP work is already in use and performs well
2) Doran’s proposals wrt device driver redesign have been opposed by people responsible for other architectures because they are PC-centric
Read tech-kern, it’s all there.
>Read tech-kern, it’s all there.
Ahem, you know there are *many* mailing lists to read…
Too bad, apparently there isn’t an equivalent of KernelTrap or LWN that does summaries of the interesting threads…
Maybe the lack of depth so far has been due to a lack of resources – with the funding, Doran will actually have the resources to test ideas on multiple CPU architectures and address the caveats that appear on each one.
As for the ‘view’ by some that it isn’t relevant to NetBSD’s core goal – incorrect. Embedded devices are becoming 64-bit (VIA’s next CPU, aimed at low-powered devices, will be 64-bit) and multi-core (Motorola has plans for low-powered multi-core processors).
Also, the line between a ‘computer’ and an embedded device is blurring; laptops are increasing, desktops are declining, and the same energy efficiency that’s required in embedded devices is now required in laptops to make them truly portable. To say that a certain operating system or CPU is ’embedded only’ is short-sighted at best given that scenario.
Maybe the lack of depth so far has been due to a lack of resources – with the funding, Doran will actually have the resources to test ideas on multiple CPU architectures and address the caveats that appear on each one.
Nope. He simply doesn’t care about other architectures and his lack of familiarity is painfully obvious from the approaches he’s taking and supporting.
As for the ‘view’ by some that it isn’t relevant to NetBSD’s core goal – incorrect. Embedded devices are becoming 64-bit (VIA’s next CPU, aimed at low-powered devices, will be 64-bit) and multi-core (Motorola has plans for low-powered multi-core processors).
It’s not that it’s not relevant. It’s that it’s the antithesis. Concentrating on a single architecture is absolutely the opposite of where NetBSD had previously positioned itself.
Anyway, by unit count, the overwhelming majority of embedded devices likely to run a BSD derivative in the next few years will run on ARM, and Doran’s approach is terrible for the direction ARM is taking with, for instance, Cortex.
As far as ‘multicore’ goes, cellphones are routinely multicore now, but they run asymmetric multiprocessing, and are likely to do so for another five or more years.
I honestly believe that this chap’s comments are just a case of sour grapes.
With the SMP work we are doing, we have been careful to not negatively impact performance across the architectures that NetBSD supports. A lot of time and hard work has gone into profiling and design — with the goal being no regression.
We’ve seen the hit that the FreeBSD folks endured with SMPng and that’s something the NetBSD Project is keen not to repeat. We have learned some good lessons from that.
Systems like the PC and sparc64 do tend to get more attention when it comes to optimization, but that’s simply because they are popular and widely available. You won’t find someone sitting up into the wee hours of the morning squeezing that last bit of performance out of the vax, because it just doesn’t matter any more.
I have been watching Matt Dillon’s work in Dragonfly with interest, but the model is largely unproven (and as far as I am aware, hasn’t been delivered on yet). Dragonfly has made great improvements in a number of areas though, and we’re always on the lookout for good ideas and ways to improve the system.
— Andrew
With the SMP work we are doing, we have been careful to not negatively impact performance across the architectures that NetBSD supports. A lot of time and hard work has gone into profiling and design — with the goal being no regression.
I’ve seen this claim made in tech-kern, but the reality is that, unless that’s changed recently, you’ve done no profiling on any of the ARM family or on any of the other embedded processors.
Further, you’ve made changes that reduce performance on those architectures, and you’ve made changes that enhance MP performance at the expense of UP.
The VAX doesn’t matter any more, but the future is embedded and you will find people up until the wee hours of the morning optimizing for those architectures, most notably MIPS, PowerPC, and ARM.
Actually, your comment about the VAX pretty much betrays how shallow your understanding of architecture issues is. You seem mired in the ‘past versus pc’ view of NetBSD and completely unaware of embedded computing.
The proposal to reduce SPL granularity is another good example of not understanding the breadth or depth of other architectures, as is the whole approach being recommended for dealing with interrupt handlers.
As a big fan of the BSDs, I think it is great to hear about the inroads FreeBSD 7 has made in terms of scalability. And I just want to remind everyone that this is not a my-OS-is-better-than-yours pissing contest. Most BSDers do not care about the Linux or Solaris kernel. They like their BSD, they like the CLI elegance and simplicity BSD always strives for, and they like the fact that there is an OS/kernel that any company can take and really compete in the marketplace with. BSDers don’t care about how much better they are than Linux, etc. They simply want to make their kernel better because they want it to perform and scale better.
Certain people hate the BSD license because they don’t like that Apple or Juniper can “steal” code, but I think it is great that they are able to “steal” code. It allows these companies, which look at open source very positively, to exist (good luck competing against Microsoft and Cisco otherwise) and to hire many software developers that the marketplace would not otherwise be able to support. In turn, these employees get to work for great companies, enjoy great benefits and salaries, and give back to the open source world in their free time.
Well, enough blabbering; now on to NetBSD. In all honesty, I think NetBSD is a dead project. One person taking three months to work on SMP is not going to fix the project. There are literally millions of lines of kernel code. To fix it, you will need a lot of hard work and a lot of dedicated people. FreeBSD, which was about ten times bigger (now even more so) when it started its SMP work, has had a lot of trouble doing it and has taken over seven years to finally get somewhere, so I can’t imagine the pain NetBSD will have to go through.
With much more interesting projects going on in the BSD world (OpenBSD, FreeBSD, and DragonflyBSD), I think it is a crime to keep NetBSD going as a completely independent project. I think its people should work towards merging their codebase with one of the other BSD projects and finally just merge projects. My suggestion would be DragonFly. OpenBSD is basically one man’s vision and it will be hard for people to work with Theo as equals. FreeBSD is already too successful and already has a lot of developers. On the other hand, DragonflyBSD is unique, it is in desperate need of developers, and if successful, it could be a very strong counterbalance to the FreeBSD project in the BSD world, keeping everyone on their toes and ideas flowing between projects.
I don’t think anyone in the previous posts was trying to say BSD or Linux is the better OS. It was more an explanation of why Linux didn’t do so well (from an SMP performance perspective, i.e. handling multiple threads on a multiprocessor system) in the graphs I linked to.
I actually found the Linux statements/posts informative and would definitely be interested to see actual benchmarking, because I’d like to know which has the better SMP implementation – BSD or Linux – or to see whether both OSes perform fairly similarly in handling multiple threads/CPUs, etc.
I believe the OS with the better SMP implementation will make the better server from a performance standpoint (though reliability and security are essential too). I’m mostly interested in finding out about SMP performance and scalability (mainly FreeBSD vs. Linux, to see how they compare).
When people (like you) talk (again) about merging the BSDs’ code bases, or say that NetBSD is a dead project… oh, never mind.
I bet a lot of money you have never read about or used NetBSD or any of the other BSDs, never wrote a “hello_world.c” or looked at C code in your life.
Get a job, dude.
I already have a job that pays very well. Why would I want another?
Regarding merging, perhaps the wording was a little too vague. Of course, someone with any sort of industry experience will understand what merging codebases means. One would simply look at Oracle “merging” PeopleSoft/OneWorld into Oracle’s ERP, or Microsoft merging Axapta, GP, and Navision into their Dynamics line, or how they merged Windows 95/98 into their NT line. But perhaps I should have been more specific about “merge”, since there are soooo many projects that have actually “merged” code the way you talk about.
What I probably should have said, and which is common sense for anyone who knows a thing about computers, is that the NetBSD people should announce publicly that they are merging with another project. Then they can grab some features from the other OS to get people moving in a certain direction. And while people are slowly moving that way, they can start heavily contributing to the other project so that it will be fully ready for the merger (that is step 1, the codebase merger). Once they are ready for the move, they announce that the projects are merged (step 2 in my original post). Maybe I should have spelled that out, but then it would have been a really, really long and useless post.
Before posting another reply about merging code bases on another forum (oh gosh, you mean I can’t just put all the C headers together and things won’t automatically start working?), take a second to think about things. Think back and try to remember a project that you were able to “merge” the way you talk about. If you actually can think of one, even one you did not work on specifically, can you just tell us normal Neanderthals whether or not running cat fixed everything automatically?
Oh, what a lot of shit. The developers are NOT merging NetBSD with another project, and they won’t. Pay attention to this: NetBSD, and the other BSDs, don’t have the same goals Microsoft or Oracle have.
Have you ever looked at the NetBSD website? The development mailing lists? I don’t know why people like you, who know nothing about the project, come here to make statements like these.
Most BSDs come about with a general purpose: FreeBSD focuses on *performance*; OpenBSD is concerned with *security*; NetBSD is all about *portability*.
I don’t think it’s useful for NetBSD to hire someone to try to fix the SMP implementation. It’ll be a while before the work is completed.
The better solution would be to use the FreeBSD source code as the base, then modify and change it to meet their specific goal. For instance, look at PC-BSD, which uses FreeBSD code but customizes it into a much more user-friendly version of BSD (adding features, nicer menus, easier use, etc.).
If NetBSD adopted the FreeBSD code and worked on making it more portable, that would certainly be pretty cool (benefiting from FreeBSD’s SMP performance and fixes). Unfortunately this is very unlikely to happen, since NetBSD already has its own code base to work with (I don’t think they’d be interested in using FreeBSD’s).