As a programmer and manager of embedded software products for a living, I think that operating system programming is so much fun that it will eventually be outlawed. I’ve previously published two articles on OSNews, “So, you want to write an operating system” and “Climbing the kernel mountain”, and tried to summarize my experience in designing operating system kernels as well as technical traps that can be easily avoided.
You don’t know that you are wasting your time
I meant to write follow-up articles on the subject, but instead was sent by court order to Article Title School for two years. In the meantime, the world, and my perception of things – not just article titles – evolved. While I hope that someone, somewhere, found the advice helpful enough to start writing a kernel, I’ve since realized that the fun of doing that from scratch has – for all intents and purposes – actually been outlawed.
I hope I catch you before you burn any proof that you ever downloaded the Intel System Programming Manuals, as you look nervously through the window for the Code Police to come and send you to coder rehab. Cheer up, writing your own kernel is not really outlawed and you don’t have to move to writing accounts receivable software in C#. A lot of people with commitment, strong programming skills and free time are working on such projects as you read this, even though there is no point anymore. Some projects, such as Syllable and SkyOS, have made major progress towards a usable and stable operating environment and never fail to legitimately impress OSNews readers with their skills. The irony of such an opinion on OSNews is not lost on me. I expect kernel hobbyists to dismiss this article in the same way as I would have two years ago. Giving up rare skills that you built over a long time is not easy to accept. As a recovering kernaholic, I would love nothing more than being proven wrong by a strong, one-two punch demonstration, and to be conned into participating in a hobby OS project.
To be honest, I’m not holding my breath. Unless your only objective is to displace Robert Szeleney as the most admired underdog OS developer, your effort is probably wasted. I need to explain how I slowly arrived at this conclusion, and why this is a positive development. There is one piece of information and two colliding storylines that come into play.
How you can learn from being punched in the nose repeatedly
The piece of information first. I became a manager out of experience, not out of a Harvard MBA, so I like to hope that anyone with decent business training will read the rest of the article and say “yes, so what?”. Well, pfft! to you – that’s the sound of the tongue sticking out. I wasn’t taught product positioning in school, or how to focus on what you’re really good at. I’ve learned by coming back from customers with a bloody nose.
Moving on to the first storyline. Last year, I seriously considered joining the Syllable project. It is the only desktop OS project I know of that is at once usable and open source, and to which I could make a significant contribution, unlike Linux, where an individual contribution is a drop in the ocean. The Syllable kernel has significant shortcomings. You can feel the round-robin scheduler and the dysfunctional VM as you use the desktop. The lack of consistent primitives or of a clear notion of processes has obnoxious side effects, such as the application server not closing an application’s windows when it crashes. I thought I could help out instead of criticizing from my armchair. I did a lot of work on replacing the basic kernel with something that could support the rest of Syllable being dropped on top, and compete performance-wise with the Linux kernel.
Now for the second storyline. I created a company that develops, sells and promotes a software component architecture for consumer electronics products. The company has been around since 1998, and the product started out as software that I enjoyed writing – an operating system. Essentially, the product offered two features that, as it turned out, worked against each other. First, a component-based operating system for consumer electronics products, the first of its kind, that lets you replace any system policy, such as scheduling, memory management and power management, with your own. Second, a component model that basically transfers the well-known benefits of CORBA, DCOM or .NET, such as increased code re-use and cleaner isolation, to the consumer electronics world. Re-use is a massive problem in the industry at the moment, since manufacturers have essentially moved from being hardware companies producing VCRs and analog TVs that had little custom software, to being software companies producing DVD-RWs and digital TVs that require staggering amounts of custom software. We tried really, really hard to pitch our operating system to customers.
If they wanted the component model and the re-use benefits, they needed the OS, take it or leave it. Well, they left it, consistently, in Japanese, Korean, Dutch, French, English, and other languages. Whoops. We blamed the failure on a lot of things, but eventually, we faced the obvious facts.
One, the interface of a desktop, server, or embedded OS kernel has been refined over time to the point where it, more than the instruction set, is the machine that programmers actually program. It is a standard conceptual interface. The same way you have the Intel and ARM instruction sets supporting similar operations, you have the POSIX, Windows and µITRON APIs for memory management and semaphores, and they are conceptually identical (a short code sketch after the fifth point below illustrates this). If you offer radically different primitives, application programmers will not use them, for the sake of portability of either their code or their knowledge.
Two, a platform needs applications and a developer community in order to succeed. You have neither when you create a kernel from scratch. You could copy an existing design and API in order to have applications and developers, but it has already been done. Linux and BSD are free and fit almost any purpose.
Three, a new OS and code re-use are contradictory. For some reason, we couldn’t sell our prospects on the idea that they could re-use a lot more code than they currently do, and architect their software a lot better, by throwing all their code away and rewriting it for our OS. Our prospects had the nerve not to want to invest millions of dollars to rewrite software they already had.
Four, you can be the world’s best, or even really, really good, at only one thing. Everyone acknowledges this is true for a small to mid-size team, and I believe it applies to any organization, even one the size of Microsoft or with Google’s coffers. Microsoft does a lot of things, all of which lose a remarkable amount of money on fire and motion, except for its operating system and tightly related core applications. Google’s managers won’t touch anything not clearly related to search. Management droids even have a name for this: the hedgehog concept. Hedgehogs are really stupid animals, but they outlast many smarter ones because they know just one thing: roll into a ball of spikes when attacked.
Five, there is no entitlement when you design something. Your users, whether they pay for a commercial product or use a Free/Open Source piece of code, do not care that you spent thousands of hours of really hard work to come up with what you offer them. Syllable and SkyOS are really neat technical achievements, but they are less functional than Linux and are not attracting normal users in droves. Linux itself still has to figure out how to be better than Windows on the home desktop in order to win users over.
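To make the first point concrete, here is a minimal sketch of the same counting-semaphore idiom spelled in POSIX and in Win32. It is illustration only, not production code: the calls shown (sem_wait/sem_post on one side, WaitForSingleObject/ReleaseSemaphore on the other) are the standard ones, and µITRON’s wai_sem/sig_sem pair has the same shape.

/* One concept, two spellings. Build the POSIX path with e.g.
 * "cc sem_demo.c -lpthread"; build the Win32 path with any
 * Windows C compiler. */

#ifdef _WIN32
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Counting semaphore, initial count 1, maximum count 1. */
    HANDLE sem = CreateSemaphore(NULL, 1, 1, NULL);

    WaitForSingleObject(sem, INFINITE);   /* acquire (P) */
    puts("in critical section (Win32)");
    ReleaseSemaphore(sem, 1, NULL);       /* release (V) */

    CloseHandle(sem);
    return 0;
}
#else
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* Unnamed, process-private semaphore with initial count 1. */
    sem_t sem;
    sem_init(&sem, 0, 1);

    sem_wait(&sem);                       /* acquire (P) */
    puts("in critical section (POSIX)");
    sem_post(&sem);                       /* release (V) */

    sem_destroy(&sem);
    return 0;
}
#endif

The spelling differs; the primitive does not, which is exactly why a kernel that offers radically different primitives has such a hard time attracting application code.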
Armed with this reality check, my company repositioned its product away from the operating system and squarely onto the component model and code re-use. Essentially, that meant porting our OS on top of the popular platforms in Japanese consumer electronics – Linux and TRON at the moment – then removing what wasn’t needed anymore. All of a sudden, customers started calling us to buy our product. By focusing on our one good idea, we were then able to make our component model cover a lot more needs, and today I think it is really a decent product. In retrospect, trying to shift customers to a new OS had felt like pushing a rope.
Do I like working on what we sell now? Along with a couple of the programmers attached to kernel development, I really did mind that the market chose the least fun feature to work on. I couldn’t use my kernel programming experience anymore. Then I realized that, in comparison to writing kernel code, anything else is easier. It’s like training for a tennis match with weights and then removing them for the big game.
The skills you build doing kernel development – the precision, rigor, testing, careful debugging – can be reused with nine times the efficiency when working on something above the kernel. You just have to find something to work on that really motivates you and has a purpose.
My company is not the only one that had this epiphany, it seems. The Tao Group used to sell, guess what, an operating system for consumer electronics. You have probably heard of them as part of the Amiga saga. Their claim to fame is the Virtual Processor architecture, which lets you write portable but hand-tuned assembly code. Somehow they make this work. When the penny dropped, their OS, as a product, was taken out and shot, and now they are very successfully selling their one good idea. Their VP architecture enables very efficient graphical content that is portable across all CE devices. The JVM they designed on top of it is mighty. They’ve found their one good idea and swept the rest under the rug.
Okay, so the second storyline was a bit longer than the first. If you don’t hear from me for the next two years, please send care packages to the Article Writing School. Now, both storylines are high-speed trains heading toward each other, and they just collided.
How you can make the world a better place
The epiphany about our product, and the lack of purpose for yet another OS, spilled into my intention to work on Syllable. In the end, I didn’t. The Syllable project has assembled a small group of talented people, and I believe they are absolutely right in trying to fix the lack of integration that most Linux desktop distributions suffer from. Today, the user experience that Linux desktops offer is best described as goofy. A well-integrated kernel and desktop API, with a managed-code approach to RAD such as the one Mono offers, would make things a lot more consistent for the user. However, in light of the epiphany, my opinion is that the Syllable team is severely misguided about how to solve the problem. They want to be the best at such an integrated desktop while also having a good kernel, drivers and what have you.
Let’s ignore the typical kernel that is still stuck in “bootloader stage”. Most kernel projects start with an idea such as: let’s create an OS around this new filesystem thingie. Then, night after night, a small, expanding team of programmers duplicates scheduling, virtual memory, a POSIX-ish API, the Windows registry, device drivers for ISA network cards and a USB stack. Two years later, the filesystem thingie is implemented. You have to commend the project developers for getting that far, but then the thingie is not as useful as it sounded, and the project is repositioned as a general-purpose OS of sorts. Or worse, it actually is useful, but nobody ever benefits from it because their favorite application doesn’t work on this strange OS. Worse still for the project, somebody steals the idea and releases it for Linux with great success. In any case, a lot of effort was spent on duplication, while producing nothing new for the OS world or for the users – you know, the ones who throw us a bone once in a while so that we can afford to keep having fun programming.
In terms of desktop or server operating systems, I cannot think of a single new idea that cannot be implemented as part of an existing OS. With the source code of time-tested, free kernels readily available, and given the earlier point that the basic kernel interface is something you build on top of, there is no excuse not to experiment with the new idea as part of an existing kernel, or in userland. For instance, I think there is a lot of merit in having device drivers and other modules run as their own tasks, even kernel ones, working asynchronously and communicating with the kernel exclusively through messages. This is a cool, challenging project, and for end users it means modules can be unloaded and reloaded without any chance of failure, since no threads can be left running in the code being unloaded. The system also scales up on SMP a lot better than one with locking all over the place. You might be tempted to write a kernel just around that idea, but there is no justification for not patching Linux to work this way. It will be a lot of work to modify and re-test device drivers, but a lot less work than doing it from scratch. It can be done incrementally, always shipping an OS that works, even though not all device drivers and kernel modules benefit from the new idea yet.
Guess what, the DragonFlyBSD project is exactly about that, and they started with the FreeBSD code base.
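For what it’s worth, here is a minimal sketch of such a message-driven driver task. The names used below – msg_receive, msg_send, hw_read_block, hw_write_block – are invented for illustration; they are not Linux, DragonFly or any real kernel’s API, and merely stand in for whatever message-queue and hardware primitives the host kernel provides.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the host kernel's messaging and hardware
 * primitives; declared here only so the sketch is self-contained. */
void msg_receive(void *queue, void *buf, size_t len);
void msg_send(void *port, const void *buf, size_t len);
void hw_read_block(uint64_t block, void *buf);
void hw_write_block(uint64_t block, const void *buf);

enum drv_op { DRV_READ, DRV_WRITE, DRV_SHUTDOWN };

struct drv_msg {
    enum drv_op  op;
    uint64_t     block;       /* block number to read or write        */
    void        *buffer;      /* caller-owned buffer for the payload  */
    void        *reply_port;  /* where to send the completion message */
};

/* The driver runs as its own task; nobody else ever executes driver
 * code, so its state needs no locks, and the module can be unloaded
 * the moment this function returns. */
void block_driver_task(void *queue)
{
    struct drv_msg msg;

    for (;;) {
        msg_receive(queue, &msg, sizeof msg);   /* block until work arrives */

        switch (msg.op) {
        case DRV_READ:
            hw_read_block(msg.block, msg.buffer);
            msg_send(msg.reply_port, &msg, sizeof msg);   /* completion */
            break;
        case DRV_WRITE:
            hw_write_block(msg.block, msg.buffer);
            msg_send(msg.reply_port, &msg, sizeof msg);
            break;
        case DRV_SHUTDOWN:
            return;   /* task exits; safe to unload the module */
        }
    }
}

Unloading the driver is then just a matter of sending DRV_SHUTDOWN and waiting for the task to exit, which is exactly why the “no threads left in the unloaded code” guarantee comes for free with this design.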
I’m a bit more familiar with the dynamics of embedded operating systems than with those of servers and desktops. Their users, software engineers working for consumer electronics giants, are really smart. They are a tough and rewarding crowd to sell products to. They understand very well that their customers – you and me, buying basically the same DVD recorder every year – no longer care about some obscure edge that a custom OS would give them, so they’ve standardized on OS platforms. It used to be commercial RTOS products, or free specifications such as TRON; now the industry is pretty much migrating to Linux as the lowest common denominator for products. Even if your project is open source and free, if it’s basically a me-too kernel chasing Linux’s tail lights, nobody will care. Take Linux and apply your one good idea to it, and a lot more people will care.
I don’t believe there is a market anymore for writing a kernel from scratch, either in terms of paying customers or in terms of people who will actually use your code as their daily environment. This is depressing when you have mastered the art of architecting your wait queues, scheduler and semaphores so that everything is really efficient and maintainable. But any domain of human knowledge gets commoditized over time; this is good for everyone, and the experts have to learn to adapt their expertise to a higher-level problem set.
The good news is that the operating system is not limited to the kernel and the open-read-write-close interface anymore. A surprisingly minuscule amount of university research has gone into concepts that extend an existing OS so that applications can be developed more quickly and with higher quality. Almost all of the inroads have been made by private corporations or unfunded open source efforts, and as modestly demonstrated by my company, there is room for a lot of new, really cool projects to work on.
As kernel developers, you have an edge over other people when it comes to writing and delivering code that plugs a hole you have identified somewhere in the operating system. If you shift your focus to plugging it instead of duplicating existing, time-tested software, I can’t imagine how much better the software environment will become for all of us.
About the author
Emmanuel Marty is a founder and the Chief Technical Officer of NexWave Solutions, the supplier of the first commercially available component architecture for consumer electronics, in use at top manufacturers. He has been working with computers since the age of 10. Currently aged 28, he lives in Montpellier, France, with his wife and twin daughters.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
The real reason why you shouldn’t write your own kernel:
Because you don’t like having to write your own drivers.
I actually agree with this. That’s something.
Yeah… writing a kernel seems like too much work, but always using the same one doesn’t look good either, so I guess new kernels are only needed once in a while, don’t you all think?
Also, when writing a kernel you need to use low-level languages like Assembly or C and they are really hard, so I guess a new widely used kernel will show up when we are able to code one with a high-level language.
C and even Assembly are not hard, it’s just more detail that you have to pay attention to – especially assembly on the x86, with which registers can do what, etc…
Anyway, writing something like Quake I back in the day, which had to run acceptably on a 486, is much more elite than programming kernels – which this guy makes out to be the holy grail.
… I will continue on my OS – which is actually more than only a custom *kernel*. Why? I already get my hands dirty with day-to-day software development, so delving into kernel and OS land is something different(tm) and intriguing.
I’ve got no hankering to reach any market with it, nor will I ever attract any so-called team mates to my project.
Stay safe, folks.
@emmanuel marty: you are correct, there is no need for doing it. But to *understand* what’s going on behind the veil, it’s essential. 🙂 Having done such a thing, for example, gives one huuuge training in debugging and finding logical mistakes (which a compiler usually never finds).
ok, but how about writing your own compiler and/or language?
A solid and well-reasoned article. Frankly, I don’t think you need to be a brain surgeon to realize that reinventing the wheel is stupid.
Take a look at all the successful operating systems to date. None of them are original. All of them stole ideas, code, concepts, even kernels and kernel utilities from already existing projects.
All the operating systems that tried to be original, that reinvented the wheel, died. This isn’t rocket science, this is undeniable logic. Use what works today, and evolve from there. Don’t, in an effort to be original, innovative, different, squander your time, effort, skills and money.
Kernel gurus will be better off playing with already existing free kernels and specializing in a component they find fascinating, rather than starting yet another kernel/operating system project.
I think the article misses one rather important point. Don’t write your own kernel if you don’t know what you’re doing. If you want to learn about systems/OS programming, maybe it’d be best to learn about it properly first (take a course or something). Then you will be better prepared to actually write a kernel, and probably skip a lot of the issues raised here…
What a great, coherent, well-argued article. There’s still hope for OSNews.
Thanks Emmanuel for the article and I hope that many will hear your wise words.
You must be god’s gift to humanity. I can’t wait for you to come and show us your brand spanking kernel with great driver support, because we all know that all of the Linux kernel developers did not go to school or learn how to do a kernel properly.
Many many OSes are hobby systems. Hobbies scratch itches.
For some OSxxx is about ruling the world; for others that is just excusing some itch scratching 😉
On the other hand, most hobby operating systems seem to start (and end) with a kernel. There are very few that seem to be ‘systems’ out there.
It seems to make sense to borrow kernels for hobby operating systems too, and build the libraries that actually constitute the ‘platform’ upon which applications might stand instead. It might make results come quicker. Or does that not scratch the itch vigorously enough?
“You must be god’s gift to humanity. I can’t wait for you to come and show us your brand spanking kernel with great driver support, because we all know that all of the Linux kernel developers did not go to school or learn how to do a kernel properly.”
I can’t quite tell if you’re trying to troll or not. What I was trying to say is that many people try to write a kernel and know little or nothing about systems programming in general. Is that a good thing? Will that mean they’re going to have an easy ride? Of course not. Maybe if they learnt a bit more about systems programming in general, they might find kernel programming a little better. Is that such a bad thing?
I think a lot of the Linux or whatever programmers would be well versed in kernel and systems programming. Otherwise would they be able to commit bad code to the kernel?
it hasn’t all been done before… and anyway the linux path seems to have narrowed… its branching factors limited…
there is a need for new ideas and new ways of doing things… but we are running into the limits of the inflexible i386 architecture…
i would even suggest that amateur kernel hackers develop theirs within a sandbox… like a JVM or similar… to reduce the amount of less-interesting work that is aside from the issue they want to experiment with…
The other point completely missed:
The number of people who are capable of writing a “functional kernel” and interested in doing so is so damn small. Most computer science grads wouldn’t have a clue where to begin.
A lot of nice talk, but I think that it boils down to “it’s more fun but less efficient to write new code than to modify existing code”, which can often mean “it’s more fun but less efficient to write new bugs than to fix existing ones”.
“I think the article misses one rather important point. Don’t write your own kernel if you don’t know what you’re doing.”
Did you even read the article? I think not.
The point of the article is that if you are qualified to do kernel hacking, do not write a new kernel. There is little point in doing so and the author cogently explains why. It is not a question of whether or not you know what you are doing. Read the article, instead of just commenting willy-nilly.
Methinks you must first RFA and then speak your wisdom about it.
Yes, I RFA, thanks so much for asking.
I did not get that same impression you had, unfortunately. I got the impression that if you’re trying to implement some great new thing, don’t reinvent the wheel just trying to achieve it (http://www.osnews.com/story.php?news_id=8162&page=3, “filesystem thingie”).
I wish Linus Torvalds had read that article, then there wouldn’t be so much fuss about Linux these days. Sorry, couldn’t help it.
Why does everything in the world need to be efficient? If someone likes to write kernels for fun, so what! A lot of inventions are made by people who were considered to be wasting their time; history proves that most critics were wrong in the end.
If you tie everything to an economic model, sure, it will be considered inefficient, but that’s the general problem with society nowadays: only profit counts.
Good ideas die for economic reasons and that’s a pity. I don’t wanna bring up the Betamax vs VHS discussion here, but facts prove that free market models do not always favor the most technically advanced technology.
Only when a technology is being developed by someone not under pressure to generate profit will it flourish.
The point is that if someone wants to write his or her own kernel and has fun doing so, they should do so and not think about the economic impact. If Linus had worried about that, we wouldn’t have Linux now.
“The point is that if someone wants to write his or her own kernel and has fun doing so, they should do so and not think about the economic impact. If Linus had worried about that, we wouldn’t have Linux now.”
I thought I recalled that Linus developed Linux directly because of the economics? Wasn’t he upset that Minix wasn’t free as in beer, nor free as in speech?
As an aside, I rather imagined my ‘filesystem thingy’ would be extending an existing OS, not an excuse to write a hobby OS to encapsulate it.
When you have an OS, you make a choice in file system semantics: UNIX-like, VMS-like, MS-DOS-like, etc. etc. Putting forward other ideas (Internet-like) is just scratching my itch 😉
We do still need a good, working POSIX micro-kernel with a HAL and a real driver model (which could use the code base of the Linux/BSD device modules).
Generally one could “steal” a lot of the Linux kernel (like the scheduler, semaphores etc.).
I would like this kernel to be able to replace the ’70s-style monolithic Linux architecture for good. I wanna run my system for more than a few weeks (until a new kernel version is out fixing 10000 bugs and security issues – it sucks, really!)
When Linus began writing Linux, there wasn’t a Free kernel available of the quality of Linux 2.6.
When Linus began writing Linux, the expectation of what a kernel and the OS that sits on top were supposed to do was very different from what we expect of both today.
I think technical merit continues to drive the Linux kernel development more than any concern with profitability.
@ stew:
> The real reason why you shouldn’t write your own
> kernel: Because you don’t like having to write your
> own drivers.
Repetition #{I don’t really know anymore}:
There are cross-platform driver architectures readily available, like SNAP or UDI. Every single OS out there could work with a shared pool of device drivers. No reinventing the wheel, no hardware compatibility lists.
But Microsoft doesn’t embrace them because they would be hilariously stupid to give away one of their core advantages – drivers.
The FSF doesn’t embrace them because those architectures don’t fit their idea of “freedom” (or rather, because it would give away one of *their* core advantages – drivers).
@ Mystilleef:
> All the operating system that tried to be original,
> that reivented the wheel, died.
And many of them left behind new, original things that others picked up. Preemptive multitasking wasn’t an invention of MS-DOS, Linux / BSD, or AppleOS.
And much of what those dead systems left behind has yet to be picked up by anyone. Much of it has been forgotten, with only a few people still remembering the ease of navigating a file structure that could be understood by mere mortals, adding a .catalog in your native language that the application programmer didn’t even know about, or backing up your OS to a ZIP drive with a simple copy operation and booting from that copy without further ado.
Windows, Linux, and MacOS suck in more than one regard, and that’s the raison d’être for many OS projects out there. But the big three are very happy with their oligopoly, and don’t consider for a second evening out the playing field (see my driver comment above).
Either one of the big three finally gets its act together and produces an OS that’s actually easy, fun, and painless to use, or alternative OS projects are here to stay.
And, like all things, OSes need to evolve.
Yes, you don’t need to reinvent the wheel. But if you get the idea for a plane or a rocket, please do it! Even if everyone says they prefer a boat to go over water…
Do it for your own pleasure. If you think you’ve got an idea, try it. That’s how evolution takes place.
If we stick to what we have now, if we all wait for “something that will come for us”, then, in the end, OS dev will die. Like all things, OS dev needs evolution. Even Microsoft knows this.
I think the real problem here is whether or not some project NEEDS to create a kernel.
If it is just a new filesystem, why would you rewrite a kernel?
You can write it on Linux, BSD, even Windows or Hurd if you prefer…
The same goes for UI. If you get an idea for a really new, enjoyable way of dealing with human-machine interaction, do it on top of GDI, GTK or Qt…
Then, if you come up with a really new scheduler algorithm, or if you think you can do a better I/O manager,
TRY! TRY! TRY!
Please help a smart project.
http://www.blueeyedos.com/
Face it, the author missed a rather large point when he said “let’s build a new OS around this filesystem” and then someone ports it to Linux with great success: this is called INNOVATION, and this is how new ideas are formed. The same feature may never become apparent or desirable in Linux until it is seen somewhere else – hobbyists are how things progress. After all, corporations and people trying to keep enterprise customers happy waste all day making small tweaks to existing things; hobbyists go out and invent something new. So I would have to say: interesting read, but 100% WRONG.
“Any domain of human knowledge gets commoditized over time, this is good for everyone, and the experts have to learn to adapt their expertise to a higher-level problem set”
_____________________
Nope, mathematics and physics don’t fit that commoditized-knowledge model; they are always evolving. I am sorry for software researchers if they cannot get out of the market world.
the article reads more like
“i thought i could make millions selling a halfbaked toy os with no drivers. we had no customers. now we sell on existing stuff as a reseller with our little custom whotzit included.”
the author must have been the only hobbyist os developer with his eyes closed to the reality of creating/selling an operating system.
“Either one of the big three finally gets its act together and produces an OS that’s actually easy, fun, and painless to use, or alternative OS projects are here to stay.”
I think most alternative projects (smaller than BSD, Linux) are about either trying less common internals like exokernels or just earning bragging rights–among other technical bits–rather than creating a friendly desktop, but I see your point. I also don’t agree with it.
If you have an idea that you want to see shipped with the distribution you use (like the friendly desktop you say you want), it’s better to contribute to an existing project than to start a new one. It’s the most practical way of getting it done.
If you have an idea that would require the kind of overhaul that a project doesn’t want to go through, fork it.
If what you want is some small research OS to play with different kinds of internals (the kind of thing people don’t usually put on their desktop), it might be too small a project to direct anything current to its goals. Just steal as much as you can. Reinvent as little as possible.
If you want something so different that you think you should just start from scratch, really look into an evolutionary approach. If you still decide to go anew, realize you will likely be a martyr – your project will die, only to pass on its itch-scratcher in 2-15 years to projects that do take the evolutionary path.
Back in the ’80s, every home computer came with its own unique built-in OS. I used to enjoy low-level assembly coding then. Then came the PC, and DOS was everywhere. Then Windows came, and it’s on just about any PC now. Linux is growing, and I think it might overtake Windows (some day). And there are the BSDs, Solaris, Mac OS X etc. But: in terms of basic concepts, there’s not much variation, and: all of these OSes have their own problems and imperfections.
I search the web regularly, and look for something better. There are dozens of hobby OSes, but all coded in C, C++ or x86 assembly, for PC-compatibles only, and roughly re-doing things already done. Projects that are REALLY innovative are few, and these are the ones that are interesting. Many of these are semi-completed university research projects. But what I want either doesn’t exist yet, or I haven’t found it yet.
Therefore I decided to start my own OS project anyway, and wrote a sort of “manifesto” recently. If what I want turns up tomorrow: great, that’ll save me a lot of time. If not: maybe I really am starting something that might live inside next decade’s computers.
Time will tell, and: actual coding is the LAST thing I’m planning to do. Explore concepts & ideas first, then design, re-design, re-think, and re-design again. When it’s finally time to implement stuff, use existing code wherever possible. We’ll see what comes of it. You’re welcome to check it out:
http://www.alwinh.dds.nl/tops/
JBQ summarized the article quite sharply. With regard to other people’s comments:
1. This is OSNews, not GameNews.
2. There was no freely modifiable and redistributable, general purpose kernel when Linus started developing his own. I can’t speak for Linus; however, if BSD had fully cleared up its license issues 3 years earlier than it did, I believe that Linux would never have seen the light of day.
3. You can become an expert in memory management and scheduling issues, and a kernel hacker extraordinaire, without writing from scratch a kernel that will end up being inferior in almost every single category. I believe that talented people like Ingo Molnar and Matt Dillon fall in that category. Your good idea, the one nobody else has thought of, might reach people and help them in their daily lives a lot quicker.
4. I’m nobody to tell anyone what to do. People are free to work on a kernel even if they don’t aspire to improve the state of affairs. Improving it is my personal goal; it’s not directly related to making money.
5. Mathematicians do not research the best way to do addition and multiplication these days, even though that would be the “kernel” of math. The best minds work on higher-level concepts on top, usually way on top, of these basics.
6. @ Ceaser: although you’re free to formulate an opinion on a product that you don’t know about, consumer electronics products don’t use standard hardware for the most part; the hardware is designed for cost and custom-assembled, so the customers write their own drivers. The OS we shipped is robust and has a much higher feature set (vs. footprint) than what our customers typically use, and that wasn’t the issue. The issue was porting millions of lines of code.
7. @ “de Selby”: martyr is exactly the right term.
Keep the comments coming (good and bad) 🙂
Solar,
I have nothing against alternative operating systems. It seems every alternative operating system is trying to achieve what the big three already have, perhaps with slight alterations and/or motivations.
My qualm with these alternative operating system projects is that they are eager to rewrite everything that more stable, more tested and more trusted operating systems have had for years. In other words, they reinvent the wheel and don’t do a remotely impressive job at that.
Name one alternative operating system that performs one function better than, say, the big three, and I will name a million things the big three do better than the alternative. Forgive my hyperbole.
I see, for example, many alternative operating systems trying to recreate BeOS. So what do these hackers do? They write a kernel from scratch to mimic good ol’ BeOS. Yes, BeOS was solid for its time, but for heaven’s sake, Linux and BSD have a better kernel than BeOS could ever dream of, at least today.
Why not just use those already robust, stable and tested kernels? You get an upsetting amount of drivers for free. You get to pick and tweak whichever scheduler suits your needs. You get to pick and tweak whichever filesystem best stands in for BeFS. You get chances of easily porting must-have apps to your new project, among other benefits.
But what do our friends do? They do what every geek worthy of his esteem will do. They rewrite the kernel. They magically expect drivers to fall from heaven. The kernel they write hasn’t seen any form of real-life testing. The kernel is epileptic under high load. Aha, they forgot the kernel needs tools, applications, and a window system.
Instead of tweaking, modifying or stealing already existing, well-tested, well-(ab)used, stable “free” tools – God forbid! – they rewrite those too.
Many of these so-called alternative systems won’t boot from a floppy, don’t recognize your USB keyboard, won’t operate above 1600×1200 resolution, won’t recognize your printer, don’t have an office suite, and on and on and on. The operating system they are trying to recreate had all that and more.
These are functionalities users have come to expect in an operating system. So we say alternative operating systems need to exist because of innovation and new ways of doing things. I say name one alternative operating system that is doing something I can’t do on the big three.
Yes, it used to be that I indulged in alternative operating systems just because they did something uber cool that wasn’t done anywhere else. Today, that is all but gone. By all means write your kernel from scratch, I just hope it’s an intellectual exercise and nothing more.
When the alternatives do something I can’t do on the big three, perhaps then it might raise some heads. Just don’t tell me you can’t recreate AmigaOS, BeOS, OS/2, or whatever you were infatuated with back then, with freely available tools and frameworks today. And don’t tell me that to keep your new innovative idea untainted, you have to rewrite everything from scratch. But geeks will always be geeks. Real men rewrite it from scratch, right?
Meh, I’m rambling.
Mystilleef: “BeOS. So what do these hackers do? They write a kernel from scratch to mimic good ol’ BeOS. […] Why not just use those already robust, stable and tested kernels?”
* Haiku is using the NewOS kernel, IIRC (still, Eugenia has a point http://www.osnews.com/story.php?news_id=8114)
* Cosmoe http://www.cosmoe.com/ (goes so far as to port stuff from those other BeOS wannabes, which are starting lower and lower)
Just being pedantic, sorry.
Emmanuel, I think we could use a clarification about who you are addressing with this: strict hobbyists or next-big-thing hopefuls?
This article should be titled “why you shouldn’t write your own kernel anymore __unless__ you innovate and develop the next generation”. Current kernels are far from embodying next-generation principles, and anyone who knows what they are doing should be encouraged to develop what comes next: e.g. Amoeba, etc.
It would be nice if these so-called ‘smart’ people could fix some of the problems with the current kernel.
But I guess that will never happen, because they are too smart to realize it does NOT work right to begin with.
Another area that needs SEVERE work is laptop drivers; they don’t exist, and until they do you can forget it…
Why you shouldn’t write your own
Word processor
Spreadsheet
Text editor
web rendering engine
etc…
You can build or modify perfectly good components already.
But, please write your own
Mind reading user interface,
Symphonic arrangement toolkit
and so on
About recreating BeOS using a Linux or BSD kernel: if this were a commercial project I would say yes. Apple has done this, and it must have saved tons of time and money.
But if you have the choice, a kernel that is better suited to the purpose is worth creating. In this case Haiku based its kernel on NewOS, and as such it’s not even from scratch 😉
Linux is a good kernel, I’ll give you that, but if I need to recompile my kernel to add support for another filesystem, then it’s not suited to a new BeOS.
Having to hack and patch an existing kernel to make it behave the way you want doesn’t seem like a clean way of doing things.
Look at car engines: basically they are the same as 100 years ago, only more and more fine-tuned over the years. But it is still old technology. It works, we know how it works and it generates money. This attitude, however, is holding back the introduction of more modern, environmentally friendly engines. The same applies to kernels.
“Projects that are REALLY innovative are few, and these are the ones that are interesting. Many of these are semi-completed university research projects.”
I’ve noticed this. The last one to grab my attention was EROS (http://www.eros-os.org/), but I haven’t looked around since. What good ideas have you found?
If we didn’t re-invent the wheel once in a while, we wouldn’t have Linux…
I appreciate the author’s comments, but the article is definitely looking at the commercial aspect. From that point of view I’d agree that it makes very little sense to write a kernel from scratch.
However, there are plenty of folks out there just writing an OS for the hell of it. Not for fame, not for delusions of being the next Gates, just because they find it a fun way to spend their free time. If we’re going to allow philatelists to pursue their hobby, then why deride the efforts of hobby OS-devers?
Actually, “mind-reading” user interfaces – or rather, Brain-Machine Interfaces (BMI) – are an amazing topic of research at the moment. There have been recent breakthroughs in understanding our brain’s decision process, especially for moving our limbs.
http://www.cnn.com/2004/TECH/science/07/09/monkeymind/index.html
This is something that has REALLY new possibilities. Commercial ones (such as robotic limbs duplicating human limb movement in human-hostile environments such as nuclear power plants). And, more importantly, humanitarian ones (allowing paralyzed people to use synthetic limbs).
I think that if I was given the opportunity to write software for this kind of project, I’d do it almost for free.
Linus didn’t have the luxury many of us have today when he was developing Linux. Developing a kernel a decade or so ago was quite different from writing one now.
Linus’ excuse for reinventing the wheel is valid. He couldn’t find any free kernel to play with. He decided to clone one with available public documentation and by trial and error.
From the CNN link: “The monkeys used in the study had sets of fine wires, about the size of human hairs, surgically implanted in their brains.”
Shame. Wonder how far a hobbyist could get with less intrusive sampling and a Lego Mindstorms (argh) set…
As you know, the “claim to fame” of EROS is orthogonal persistence, i.e. the ability to pull the plug on your computer, power it back on, and find your environment exactly as it was, modulo a few minutes’ work lost between the last “checkpoint” and the time you pulled the plug.
On paper, this is a wonderful concept. There are two ways of doing it: (1) applications need to support it and you need to rewrite the universe, or (2) applications need not bother with it.
EROS claims to be in category (2), but in practice applications need to be written to take advantage of all sorts of EROS-specific implementations of capabilities and whatnot, so in essence you have to rewrite the universe, or, as they did, do a clever hack so as to support UNIX applications – which in essence equates to saying that you could have modified a UNIXy kernel in the first place.
In practice, the world has scratched that itch, starting with laptops (the most exposed to the problem of power going out), by implementing a “freeze execution, store physical memory to disk, power down” strategy. This is not as elegant, but it works, and by not rewriting the universe, you can channel efforts into making that technique quicker, use less disk space, etc.
On top of this, as it turns out, the EROS way is not that clean, even if they did manage to rewrite the universe better than the staggering number of developers on existing platforms have. For instance, some things need to be restarted when the system boots up, such as network connections.
Don’t take the latter point from me, but do take it from Shapiro himself:
http://osnews.com/story.php?news_id=5316
Hello Emmanuel,
Nice article, well done! … Say, weren’t you guys called SunTech back in 2000?
Jean-Louis
You’re correct. It’s the same company, going on 7 years of existence.
Nice to see you guys are still around, although you kinda shifted focus – but for the best, as you state in your article.
Cheers,
jean-louis
@ de Selby:
> Just steal as much as you can. Reinvent as little
> as possible.
You blissfully assume that I consider the GPL a viable license for my own project, which I don’t due to its restrictiveness.
> If you want something so different that you think
> you should just start from scratch, really look
> into an evolutionary approach.
Care to elaborate what you consider an “evolutionary approach”?
@ Mystilleef:
> It seems every alternative operating system is
> trying to achieve what the big three already have,
> perhaps with slight alterations and/or motivations.
Depends on what you call “slight”.
> Name one alternative operating system that performs
> one function better than, say the big three, and I
> will name a million the big three does better than
> the alternative. Forgive my hyperbole.
That comes naturally with being one of the big three, now doesn’t it? If I knew an alternative that could stand up to the big three, we’d have a big four, now wouldn’t we? And perhaps I’d even be happy with #4 and not try to reinvent the wheel…
> …but heavens sake, Linux and BSD have a better
> kernel than BeOS could ever dream of, at least
> today.
Again, depends. I consider the Windows kernel to be much superior for the end user since it doesn’t force you to compile in support for XYZ and then compile all drivers anew because the kernel ABI changed.
> Why not just use those already robust, stable and
> tested kernels?
Because several of the things I consider broken in Linux are directly related to the kernel – like, not allowing for C++ code, not allowing for binary drivers, and having to be configured and compiled by the user?
> You get an upsetting amount of drivers for free.
*If* you are willing to put up with the Linux way of drivers, since the guys at the helm so eloquently deny any development in the direction of platform-independent drivers, which they – heralding “freedom of choice!” – could push like nobody else…
> You get chances of easily porting must-have apps
> to your new project among other benefits.
*If* I am willing to put up with a POSIX API, *if* you consider the sh**load of yet-another-way-to-do-it Linux-ish tools and their broken command line syntax a benefit…
> Instead of tweaking, modifying stealing already
> existing, well tested, well (ab)used, stable “free”
> tools, but God forbid! they rewrite those too.
Not a few of them do so because they consider the GPL to be a non-option – there goes the wonderful kernel code base you’re referring to.
> I say name one alternative operating system that
> is doing something I can’t do on the big three.
I don’t really know where to begin. Proper addressing of removable storage devices, for instance (AmigaOS). System file names and a directory structure understandable to mere mortals (again, AmigaOS). A system installation that is protected against rogue applications, properly separates applications while allowing shared resources, and allows clean uninstallation. (A concept in my own drawer.)
> By all means write your kernel from scratch, I just
> hope it’s an intellectual exercise and nothing more.
Actually, I re-focused on writing tools and libs for OS developers who aren’t so damn arrogant about assuming everything has to be POSIX, Linux, and/or GPL. My own OS project was scrapped because it was so very easy to find dozens of smartasses chanting “great ideas!” who fled once it got to doing real work.
> When the alternatives do something I can’t do on
> the big three, perhaps then it might raise some
> heads.
I don’t count on it, since Linux successfully squashed the tolerance for anything that isn’t “free” and doesn’t already do more than my-favourite-distro, regardless of what it might do *differently*.
> Just don’t tell me you can’t recreate AmigaOS, BeOS,
> OS2, or whatever you were infatuated with back then
> with freely available tools and frameworks today.
AROS worked hard on trying to recreate AmigaOS with freely available tools and frameworks.
You are blissfully unaware of how *different* an OS can be. Linux – meaning the kernel – doesn’t fit all flavours, period.
> Real men rewrite it from scratch, right?
I would very much have preferred some way that would have allowed me code reuse. You don’t happen to know some framework that doesn’t just assume you are comfortable with the GPL, Unix-style processes, Unix-style file structures, and Unix-style everything?
> Meh, I’m rambling.
Yes, you are. Then again, so am I.
“You blissfully assume that I consider the GPL a viable license for my own project, which I don’t due to its restrictiveness.”
*BSD?
The author is mostly concerned with the financial/market-driven impact of deciding to write a kernel. Technologically, that’s actually a pretty poor reason to compromise on a potential kernel design and go with Linux. Linux is a monolithic kernel. There have been voluminous debates over whether Linux should have embraced a micro-kernel design. I don’t want to turn this thread into a debate between monolithic and micro-kernel design. That isn’t my intention. But it is my intention to point out that there really isn’t any one-size-fits-all kernel. Nor should there be. Also, it’s more than a little arrogant to say that we have found nirvana in kernel design — and we’re sticking to it. Linux has its place. But it shouldn’t be the only place.
Thinking evolves. Technology evolves. OSes evolve. Hardware evolves. Etc. I don’t care how few kernel writers there are on this planet. Not long ago, it was somewhat of a joke to insist that a college student could write a serious operating system — and we all know the result. Linux is not the ultimate evolution of the kernel. There will be others. I say great. Bring it on.
the sun rises in the East.
Duh!
If you have plans for writing a full, workable OS that others will actually use, then I’d agree with the author. But I’m puttering around with my own OS — http://www.karig.net/ — and I really don’t care if no one ever uses it. I’m writing this purely for my own enjoyment. I write code when I get inspired and post the results on the website. It’s simply a hobby. And even if I never really have a complete system for anybody to download and use, I can see from the webstats that others are at least looking over my site. 🙂 The page on unreal mode has apparently proved useful to some people; somebody on Mega-Tokyo recommended that page to somebody else in response to a question.
I imagine that there are others who have thought that writing an OS would make an interesting and challenging hobby — the kind of people (like me) who’d pay for a PDF on the innards of MMURTL (a real-time OS for the 386) and read it for fun. On the other hand, I occasionally see some breathless announcement somewhere that someone is in the pre-alpha stages of writing an OS that will (say) run Windows, Linux, and Mac software all on the same platform, or some such thing. You KNOW such people are going to be disappointed and give up.
That’s my secret, I guess — I keep my expectations low. 🙂 My little OS isn’t going to revolutionize ANYTHING.
@ Will:
>> “You blissfully assume that I consider the GPL a
>> viable license for my own project, which I don’t
>> due to its restrictiveness.”
>
> *BSD?
That’s still hunting a supersonic moving target with a blowgun (the kernel doesn’t stand *still* while I focus on other things, so I have to merge all over again when I return). I still would have to put up with POSIX-ish everywhere, drivers compiled into kernel proper and everybody just looking at how well he can recompile KDE on it.
And I still haven’t really found myself at ease with the BSD license. In its way, it’s even more confusing than the GPL (which is at least *clear* on what it wants from you), and while I know how it’s applied *today*, that doesn’t tell me how an attorney might tweak it *tomorrow*.
No, thanks.
Why write your own kernel, when there are ready-made kernels to use? Writing your own code is a pain.
Why build your own birdhouse, when you can just buy one at the store? Woodworking tools are expensive and a pain to use.
Why cook your family a meal, when you can just stop by the fast food joint on the way home? It’s such a hassle cleaning up the kitchen.
Why sing, when you can play a CD? Heck, even Rob Zombie has a better singing voice than you.
Why write yet another book, when there are already more books than anyone could read in a lifetime?
Why paint yet another picture, when there are already more paintings in the world than anyone could possibly fit on all their walls?
As soon as the last artist in the world puts away his paintbrush for good, the last novelist retires, people stop singing in the shower, and only designated professionals cook or build things, I promise then I will stop working on my kernel. But not a minute sooner than that…
Basically the author says “don’t write your own kernel, ’cause nobody is going to port their stuff over to your kernel”. Well, I’m happy I can still continue my work on ReactOS then (which aims to run unmodified Win32 binaries, both apps and drivers). Feel free to join us Emmanuel 🙂
http://www.forbes.com/technology/enterprisetech/2004/08/31/cz_dl_08…
and
http://www.eweek.com/article2/0,1759,1640521,00.asp?kc=ewnws083004d…
The Linux KERNEL is the least of its problems….
Excellent links from UN-biased sources.
> Depends on what you call “slight”.
Slight as in how Windows is different from OS X and is different from Linux even though they all do the same thing at a fundamental level?
> That comes naturally with being one of the big three, now doesn’t it? If I knew an alternative that could stand up to the big three, we’d have a big four, now wouldn’t we? And perhaps I’d even be happy with #4 and not try to reinvent the wheel…
They are not the big three because they are God’s chosen son, you know.
> Again, depends. I consider the Windows kernel to be much superior for the end user since it doesn’t force you to compile in support for XYZ and then compile all drivers anew because the kernel ABI changed.
Yeah, it depends. Because I consider a kernel that lets strangers somewhere on the Internet at its key calls, via IE6 (which is also ingrained in the kernel), fundamentally flawed. I also abhor having to reboot my system after installing software. There goes your superiority. Oh look, the windowing system is embedded in the kernel – sweet! Now I can write apps that crash the system! Yuppeee! And you wonder why your system is rebooting randomly. Duh, Lavasoft is telling me you have 300 (of course poorly coded) spyware programs running behind the scenes.
I’m sorry, but I would rather eat compiling support for XYZ for dinner than have to deal with the repugnant rubbish I go through on Windows. Tell me why I need to reboot the damned machine because I installed software? Or why I have to stop all running processes to install software? Don’t get me started on security, viruses, trojans, ActiveX, proprietary formats, ugh… I’ll pretend you didn’t read that statement.
> Because several of the things I consider broken in Linux are directly related to the kernel – like, not allowing for C++ code, not allowing for binary drivers, and having to be configured and compiled by the user?
That’s it?!? Are you for real?
> *If* you are willing to put up with the Linux way of drivers, since the guys at the helm so eloquently deny any development in the direction of platform-independent drivers, which they – heralding “freedom of choice!” – could push like nobody else…
I have no clue what you’re trying to brew here.
> *If* I am willing to put up with a POSIX API, *if* you consider the sh**load of yet-another-way-to-do-it Linux-ish tools and their broken command line syntax a benefit…
I don’t know any operating system that has better command-line tools than POSIX and UNIX. Where is this command-line syntax utopia you talk about?
> Not a few of them do so because they consider the GPL to be a non-option – there goes the wonderful kernel code base you’re referring to.
Fine, go license a proprietary OS if that serves you better. We are giving you an OS on a gold platter, but you hate gold; you prefer wood instead.
> I don’t really know where to begin. Proper addressing of removable storage devices, for instance (AmigaOS). System file names and a directory structure understandable to mere mortals (again, AmigaOS). A system installation that is protected against rogue applications, properly separates applications while allowing shared resources, and allows clean uninstallation. (A concept in my own drawer.)
And where exactly do the big three fail in this regard?
> Actually, I re-focused on writing tools and libs for OS developers who aren’t so damn arrogant about assuming everything has to be POSIX, Linux, and/or GPL. My own OS project was scrapped because it was so very easy to find dozens of smartasses chanting “great ideas!” who fled once it got to doing real work.
I figured. And I’m sure there are probably free tools out there that already serve the purpose of your rewrites.
> I don’t count on it, since Linux successfully squashed the tolerance for anything that isn’t “free” and doesn’t already do more than my-favourite-distro, regardless of what it might do *differently*.
You actually wanted me to pay for your favorite distro that does less than Linux? I get a car to take me to school for free, and here you are suggesting I pay for a bicycle ride to school. What, am I nuts?
You are blissfully unaware of how *different* an OS can be. Linux – meaning the kernel – doesn’t fit all flavours, period.
How? If you are smart enough to write a kernel from scratch, how difficult can it be to modify already existing free, stable, mature, robust, scalable, customizable kernels to meet your needs?
I would very much have preferred some way that would have allowed me code reuse. You don’t happen to know some framework that doesn’t just assume you are comfortable with the GPL, Unix-style processes, Unix-style file structures, and Unix-style everything?
I do, unfortunately, many of them are dead. The last successful one remaining is an embarrassment to computer science and software engineering in general.
For some reason many long-term Linux users don’t really care, Mr. Unbiased Sources. After years of using Linux I use it because it works for me, and it works for me on commodity hardware. I used it before it had 1% market share, and I’ll continue to use it. The true joy of Linux, and GPLed code, is that it can live forever. There is no Be Inc. to go under, there is no target of a hostile takeover… There is just something that works for me. I mean sheesh, why can’t people just be pro-whatever-they-like without resorting to anti-the-other-guy tactics.
The article referred to the state of affairs (Linux and Windows), in which it was ‘un-biased’ in content.
It was not for or against anything, just stating facts backed up with stats and numbers.
http://www.forbes.com/technology/enterprisetech/2004/08/31/cz_dl_08…
The link above just covers what the current state is, that’s all and nothing else.
MAC
There’s no reason not to write the kernel, too, if you’re writing your own OS – particularly if it is an exokernel, which will lend greater flexibility than monolithic kernels or microkernels. Yes, even greater flexibility to drivers (sure, you might still find yourself writing them). With an exokernel, one could utilize far greater things from other OSes – even have other OSes run on top of it. Some, quite unmodified. Depends on what one wishes to do.
I see no real reason in this article not to write a kernel, save for one based on experiences that might have simply created a personal bias against them. There will always be someone, somewhere, writing an OS kernel. The prospect and the interest are too great for some to avoid it.
I do, however, appreciate the movement started by some to implement a kernel for all. But some, like me, would rather avoid even the general licenses of some things. Open source is here to stay, but it too is not ubiquitous.
–EyeAm (author of NOVIO)
http://s87767106.onlinehome.us/mes/NovioSite/index.html
if they feel like wasting their time then let them. as long as it does not harm you then just let them go their own way. maybe they will come up with something, maybe not, but there should never be anyone other than yourself that tells you what to do with your own time. that is the essence of freedom.
why should we care what os people use these days? or hardware platform for that matter? with the net and good standards one can create files in one place and read them everywhere else. this is why stuff like xml was created: a human readable filetype that can be used to create anything, including vector graphic files (svg).
the main point is freedom, freedom to choose whatever os or program that you want to use to do the job that you set out to do. this may make marketing people in every it company on the planet get gray hair but i don’t care. freedom is the essence of a free market, freedom for both the seller and the buyer…
If IE exposes system calls, how the hell can that be a kernel problem? It’s obviously an IE problem, don’t you think??
Also, IE isn’t ingrained in the kernel, and GDI isn’t part of the kernel; it just runs in kernel space (different things).
None of the things you say make up for the fact that linux lacks in terms of hardware abstraction, the fact that it lacks modularization, the fact that it doesn’t have a decent driver model, and the fact that any future development is bounded by aging posix standards.
“None of the things you say make up for the fact that linux lacks in terms of hardware abstraction”
Then start with NetBSD.
“the fact that it lacks modularization”
Fixing it is still easier than starting over.
“the fact that it doesn’t have a decent driver model”
But it has drivers. We can argue about how hard changing the driver model would be (both Linux and BSD have gone through changes), but without Linux or BSD you wouldn’t have any drivers, no matter how nice your driver model is.
Even when changing from a monolithic kernel to a microkernel, I’d grab hold of those drivers.
“and the fact that any future development is bounded by aging posix standards.”
Changing that is still easier than starting over. Isn’t that what some BeOS clones are doing?
Back in the cretaceous period the world was dominated by two kinds of creatures: lizard-hipped dinosaurs and bird-hipped dinosaurs. It was obvious that there was no need for a new type of creature, since those two were so successful. In fact, each type of dinosaur had analogous features with the other. File-based data storage, windowed GUIs, big honking teeth to rend the flesh of herbivores, they were going to dominate the gene pool for all time.
Of course, useless other creatures kept evolving. Flying things with feathers, marsupials with 4-chambered hearts, placental creatures who bore their young live and nourished them from special glands. It was a waste of time for these genotypes to emerge, since the dinosaurs were all that you needed.
Bang. Who’s digging up whose bones 60 million years later? Is emulating Linux or Windows on one of the OSes being written right now the only way a museum will be able to demonstrate how we used computers at the start of the 21st century?
The world only really needs one or two computers anyways.
I’m not saying that changing a current kernel or OS is sooo easy that anyone could do it on a weekend. I’m just saying it takes a pretty extreme case (such cases exist) for rewriting the whole thing (boot, kernel and its algorithms, drivers, userland) to be a time-saver. If that’s you, best of luck.
No, making Linux a microkernel and so modular that it can compete with Windows in that SPECIFIC area would be a rewrite.
Especially funny are the guys who want to change from a monolithic kernel to a microkernel without touching the drivers. Very interesting concept; it does not work, however! You’ll have to write code to manage the drivers’ hardware access from user mode, which means you’ll at least need to adapt the drivers, and in most cases you’ll want to run them in a real userspace environment, too.
And POSIX is aged enough that it should be done differently in nearly all areas: device interfacing (replace device files – I saw /dev/null being specified by POSIX – with a component model like CORBA), kernel interfacing (replace fork() and a lot of other functions), etc.
CORBA would then fulfill a similar purpose; however, it would do the job better, and along the way we could get rid of ioctl too, or at least hide it in stub/skeleton code.
And how do you want to create a truly different OS on top of Linux? You might ship a different shell, a different GUI system, but your base will automatically be POSIX-like.
From what is seen on this thread, Linux has weaknesses (I guess I’ll get flamed for saying that holy Linux has any weakness) and Windows has some.
The Linux Kernel can’t really use binary drivers.
Windows as a whole has great potential for trouble (at least IE and Outlook, which are mostly not kernel problems, as was correctly stated here).
However, changing the very basis on which the Linux kernel was designed will *NOT* be easier than writing a new kernel.
If you change what everything is built on, you need to change everything that is built on top of it, too, or you add an emulation layer, like Cygwin is for POSIX on Windows. However, would you feel comfortable knowing that something like Cygwin had to be designed on purpose for your OS, even though everything created for your OS was intended to be *native*?
“Especially funny are the guys who want to change from a monolithic kernel to a microkernel without touching the drivers. Very interesting concept; it does not work, however!”
Of course you have to modify the drivers, but do you want to figure out how to interface with a few thousand devices, most of which I assume you don’t have? Linux and BSD, even in the most extreme case, serve as a good information repository for drivers.
If IE exposes system calls, how the hell can that be a kernel problem? It’s obviously an IE problem, don’t you think??
Ugh… the kernel shouldn’t allow that. Don’t you get it, IE is running as a privileged process!!! Isn’t that crazy? Whether or not it is IE’s problem is irrelevant. Who wrote IE? Who wrote the kernel?
Also, IE isn’t ingrained in the kernel, and GDI isn’t part of the kernel; it just runs in kernel space (different things).
It is. That’s why you need to reboot the bloody system after you upgrade applications or even IE. Another clear and gaping design flaw. That’s why faulty applications can crash Windows. That’s why you need to close all running processes to install applications and then reboot after that.
None of the things you say make up for the fact that linux lacks in terms of hardware abstraction, the fact that it lacks modularization, the fact that it doesn’t have a decent driver model, and the fact that any future development is bounded by aging posix standards.
Okay prof, questions:
1). How is windows hardware abstraction better than Linux’?
2). How is windows more modular than Linux?
3). What are the newer better standards Linux should follow aside from posix and why?
4). How does posix suck and why?
Still I doubt that you can test them all anyway!
How should the kernel stop IE from executing another program? IE should not accept commands like this from JavaScript in an HTML file.
1. Compare DirectX with messing around at the same level, for example with a sound card under Linux. Enough said. (A small code sketch follows this list.)
2. Instead of recompiling the whole kernel with a lot of drivers, I install one driver.
3. Those have to be created, but nobody does, as they are all on the “let’s-improve-Linux” trip.
4. Look at fork(), one good example. Look at /dev/ in your Linux installation (it’s the same with other *NIX systems). LOTS OF EXAMPLES!
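To make point 1 concrete, here is a rough sketch (mine, not from any poster; the device path and values are purely illustrative, and the old OSS /dev/dsp interface shown here has since been superseded by ALSA) of what configuring a sound card at that level can look like:

/* Sketch: setting up a sound device through the old OSS /dev/dsp
 * interface - a pile of raw ioctl requests instead of a higher-level API. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/dsp");
        return 1;
    }

    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    /* Three separate magic requests just to say "16-bit stereo at 44.1 kHz". */
    if (ioctl(fd, SNDCTL_DSP_SETFMT, &fmt) < 0 ||
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels) < 0 ||
        ioctl(fd, SNDCTL_DSP_SPEED, &rate) < 0) {
        perror("ioctl");
        close(fd);
        return 1;
    }

    static short silence[44100 * 2];        /* one second of silence */
    write(fd, silence, sizeof silence);     /* raw samples go straight to write() */
    close(fd);
    return 0;
}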
How should the kernel stop IE from executing another program? IE should not accept commands like this from JavaScript in an HTML file.
No, friend, this has nothing to do with HTML and JavaScript. Ever heard of ActiveX?
1). You didn’t say jack!
2). Who says you have to recompile the whole kernel with drivers?
3). You didn’t answer the question.
4). What’s wrong with fork() and /dev?
Look I’m getting tired of arguing. Go write a new kernel if you want, who cares!
“Still I doubt that you can test them all anyway!”
I know, but I think I made my point. In my opinion, the drivers are the single most important bit of free code you can leverage. It might be as bad as porting to another OS, but by god I would do anything I could to steal these drivers!
I also think a complete rethinking of the API and so on is something that will eventually have to be done. (Love BeOS! – it could go even further.) Maybe this is a case where rewriting is best. I’m not a kernel hacker, but I can’t help but imagine you could cannibalize boot code, scheduling, or something to save some time…
In either case, you have to admit that most projects try to be POSIX compliant to leverage compatibility and because they aren’t about rethinking everything.
And I think I’ve said “even in the most extreme case” one too many times.
Then deactivate it. It doesn’t bug normal pages anyway, as no one uses it.
1. I did. Simply experience these two different APIs, both of which exist to give an interface, an abstraction, to the hardware, and you’ll see that DirectX is the better one. And no, FMOD under Linux sits above the level that DirectX operates on.
2. How else should it really work reliably? At best with binary modules that I then need to match to the right kernel version.
3. I did, innovation has already been stalled too much.
4. fork in Linux is only a hack to make a bad system work at reasonable speed; it had to be extended with clone. The exec call replaces your process with a new one. Sense, usefulness? No idea, but with fork() and exec() you can create a process that runs separately. However, why not provide one single call that does not rely on copy-on-write?
And threading is where fork’s relative clone comes into play. With a process-only scheduling model, at the beginning you had to fork to simulate threads. Now with clone you at least get rid of the address-space switches involved with fork; however, why not a clean task/thread model from the beginning?
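For anyone who hasn’t run into it, here is a minimal sketch (mine, not from any poster; it assumes an ordinary POSIX system) of the fork()+exec() pair being criticized, next to posix_spawn(), which bundles “start this program in a new process” into a single call:

/* Sketch: classic fork()+exec() versus the single-call posix_spawnp(). */
#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

int main(void)
{
    char *argv[] = { "echo", "hello from the child", NULL };

    /* Classic POSIX way: duplicate the whole process, then immediately
     * throw the copy away by overlaying it with a new program. */
    pid_t pid = fork();
    if (pid == 0) {                     /* child */
        execvp(argv[0], argv);
        _exit(127);                     /* only reached if exec failed */
    }
    waitpid(pid, NULL, 0);

    /* Single-call alternative: no intermediate copy of the parent is
     * visible to the programmer (the library may still fork internally). */
    if (posix_spawnp(&pid, argv[0], NULL, NULL, argv, environ) == 0)
        waitpid(pid, NULL, 0);

    return 0;
}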
/dev/ – all those device files are flawed abstractions.
Some devices have no meaningful read or write semantics, and the rest is all done via messy-looking ioctl code.
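As a small illustration (mine, not from any poster): even asking a terminal device for its window size goes through a magic ioctl request code rather than through read() or write():

/* Sketch: the "everything through ioctl" style being criticized. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/tty", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/tty");
        return 1;
    }

    struct winsize ws;
    /* TIOCGWINSZ is one of hundreds of per-driver request codes; the type
     * hiding behind the untyped third argument is documented per device,
     * which is exactly the "flawed abstraction" complaint. */
    if (ioctl(fd, TIOCGWINSZ, &ws) == 0)
        printf("%d rows x %d cols\n", ws.ws_row, ws.ws_col);
    else
        perror("ioctl TIOCGWINSZ");

    close(fd);
    return 0;
}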
The first thing I would split is kernel and drivers.
Then I can try to port those drivers to my OS, however, they may get major changes in interfacing.
“/dev/ – all those device files are flawed abstractions.”
You deny that Unix is God? You corrupt the children! (Wonder what you think of Plan 9…)
You don’t want to know!
No, after reading a bit about Plan 9 I still don’t know what is really so special about it. The Unix concept taken a step further. So what?
Nice article, dude. Enjoyed reading it…
So this is what you’ve been up to on those long flights.
Catch ya later.
PW
The article made several good points, in reasonable, well articulated language. There’s nothing new in the advice, it’s all been said before, and most people will ignore or misunderstand it. But I appreciate the attempt.
Projects must have a goal if they are to accomplish anything worthwhile. A narrowly defined goal is likely to lead to a focused project with useful results. A broad, ambiguous goal is likely to lead to a project galloping off in several directions at once, with little to show for the effort.
If your goal is to build a better desktop, focus on the desktop. Don’t get distracted into building an OS when you can use or adapt an existing one. Even a desktop is too large to bite off as a single project; perhaps it would be better to build an application or game as a demonstration of your UI ideas. If you insist on creating a new language, compiler, kernel, drivers, window manager, and UI toolkit, you aren’t likely to have a better desktop to show.
If you want to learn about kernels, then go ahead and write one. But adapting an existing one may teach you more. Reading other people’s code may teach you more than writing your own, although it’s less fun. Learning as a goal is reachable; creating a kernel that anyone wants to use is less so.
Do one thing well. Then do the next thing. You will soon have something to be proud of.
Try to do everything, and you will have nothing.
Okay prof, questions:
1). How is windows hardware abstraction better than Linux’?
2). How is windows more modular than Linux?
3). What are the newer better standards Linux should follow aside from posix and why?
4). How does posix suck and why?
1) Uh? Because Linux doesn’t have a hardware abstraction layer, each driver has to take care of its own stuff (interrupts, locking, etc.); the drivers themselves are the abstraction layer. Linux drivers have to deal with interrupts, locking and so on, while Windows drivers don’t have to.
2) Because instead of being a big monolithic blob it’s divided in multiple components.
http://tutorials.findtutorials.com/read/category/97/id/379
3) and 4) because i don’t like how the posix standard deals with things like threading and asynchronous i/o, and all the lame hacks posix has picked up over time to remain usable. new standards are needed
Article: I hope I catch you before you burn any proof that you ever downloaded the Intel System Programming Manuals, as you look nervously through the window for the Code Police to come and send you to coder rehab.
Well I have downloaded them… heh.
Article: Most kernel projects start with an idea such as, let’s create an OS around this new filesystem thingie. Then, night after night, a small, expanding team of programmers duplicates scheduling, virtual memory, a posixish API, the windows registry, device drivers for ISA network cards and an USB stack…producing nothing new to the OS world, or the users, you know, the ones that throw us a bone once in a while so that we can afford to keep having fun programming.
You’ve got a point here. But then, I would never intend to do such a thing. Yes, this is what Syllable, Linux, SkyOS and Co are doing. But who wants a POSIXy API? Who wants a ‘filesystem thingie’? And who wants to write it all in wonderful C… Not me. I am not interested in re-implementing Unix. The whole point of why I bother writing ‘system code’ is to find a different and better way of doing things. I hate kernels and operating systems. Everything I need should fit on a floppy disk, and yet be ‘modern’ and do everything OSs can do today. Instantaneously. People should NEVER have to wait for computers. These are not impossible requirements.
Article: I don’t believe there is a market, either in terms of paying customers, or in terms of people that will actually use your code as their daily environment, for writing a kernel from scratch anymore.
Yes, Microsoft owns it.
Article: The good news is that the operating system is not limited to the kernel and the open-read-write-close interface anymore. A surprisingly minuscule amount of university research has gone into concepts that will extend an existing OS to develop applications quicker and with higher quality.
Yes. I think we can agree. Don’t write any more Unix-a-like kernels! If you write new system code, do it to be better, to be new and different and to advance the state of the art! To do things which would be *impossible* on Linux and Windows!
Also when writing a kernel you need to use low level languages like Assembly or C
Not necessarily. There have been operating systems written in Lisp, although you probably still need a bit of assembly here and there.
“‘Also when writing a kernel you need to use low level languages like Assembly or C’
Not necessarily. There have been operating systems written in Lisp, although you probably still need a bit of assembly here and there.”
You could go back to Multics. It was written in PL/I with just a little bit of assembly. There are even some now written largely in Java.
@de Selby:
I also thought EROS looked interesting. Some paper on part of it having been verified (PROVEN correct, this is rare in OS’es) caught my attention. But I also like simple/small. Source code is 60+ MB. (base system only!), and that killed it right there for me. Also I’m not sure I’d want to have persistence as a core OS feature.
Example: you know the Japanese game of “Go”? No game you play is the same, and this boardgame can be fascinating at times. Yet, it uses only black and white pieces, a board with squares (different sizes possible), and all rules can be explained in 10 minutes or so. It’s this kind of simplicity that I’m looking for. Complex system, but very few simple core concepts. Like fractals: fascinating patterns, but a one-liner describes the essence.
Start from scratch, or modify existing code? That depends on what you want, and what is the best way to get there. If you need something with 1000’s of functions/features, starting from scratch could cost you years of coding. But if you need something for a single function or a simple device, firing up an assembler and starting at memory location #0 might be the better way.
I view existing code mostly as “sample implementations”, showing you how you COULD do things. Use it if it suits your needs or speeds up your project.
What else? Read my blurps: http://www.alwinh.dds.nl/tops/
and eehh… Google is your friend.
“Also I’m not sure I’d want to have persistence as a core OS feature.”
Me either. I just ignored it and looked at the capability security, the message passing system, and other ideas along those lines.
“But I also like simple/small. Source code is 60+ MB. (base system only!), and that killed it right there for me.”
That’s one of the things I like about Plan 9. The name spaces, everything-as-a-file, and the sans-‘kitchen sink’ approach, among other things, make it pretty small.
Also, the shell (http://www.star.le.ac.uk/~tjg/rc/) is one of the cleanest I’ve used on a Unix-ey system. (Not a huge fan of Bash or the mountain of tools it comes with.)
Is the discouraging of innovation ever a good thing? This guy paints a rather gloomy picture of the future, if all we will ever have is what we have now.
“Is the discouraging of innovation ever a good thing? This guy paints a rather gloomy picture of the future, if all we will ever have is what we have now.”
That’s not what he’s saying at all. What he’s against is just doing the same thing over and over. He wants new things to happen, but by a lot of reuse of public code for everything else–so things might actually get done. It’s an effort at practical innovation.
One person might experiment with windowing systems, but use a public kernel and base system. Another might use a public base and window system, but play with parts of the kernel. Each in their own separate project.
Maybe that’s not the most accurate way to put it, but I think that’s the spirit of it.
It is, however, hard to swap from a stable in-house kernel to Linux. But it has to be done.
> Linux and BSD, even in the most extreme case,
> serve as a good information repository for drivers.
You still don’t get it, right?
Linux drivers are *worthless* for other OS developers, unless all they want to do is yet another Unix clone under GPL.
Think about that for a minute.
I can’t take the drivers as binary, since the very architecture of Linux requires drivers to be in source form and recompiled every once in a while. I can’t use them in source form, since my whole kernel would then be forced under the GPL, and I would have to mimic the Linux kernel architecture.
I can’t even use them as reference due to the GPL.
Linux drivers are *worthless*, and the FSF goes long ways to ensure it stays that way.
1) Uh? Because Linux doesn’t have a hardware abstraction layer, each driver has to take care of its own stuff (interrupts, locking, etc.); the drivers themselves are the abstraction layer. Linux drivers have to deal with interrupts, locking and so on, while Windows drivers don’t have to.
Ever asked yourself why that is so? Well because the kernel is proprietary (read:closed source) so Microsoft has to prepare a layer to prevent anyone from playing directly with the kernel except themselves. HAL ISN’T designed for your convenience, it is designed to protect Microsoft. Go figure!
2) Because instead of being a big monolithic blob it’s divided in multiple components.
http://tutorials.findtutorials.com/read/category/97/id/379
Oh God. You’re clueless!
3) and 4) because i don’t like how the posix standard deals with things like threading and asynchronous i/o, and all the lame hacks posix has picked up over time to remain usable. new standards are needed
I don’t like cars! I don’t like them because of how they move and how they have engines under their hoods and the lame hacks car manufacturers have picked up over time to make them usable. We need a new mode of transportation.
Weak.
BSD.
Linux drivers are NOT worthless for use by other OS developers. License considerations may prevent them from directly using the exact code, but they probably can’t use the exact code anyway, due to architectural differences.
The value to other projects is in the documentation of the interface to the hardware. They’ve got a working example of what structures and values are needed. These are precisely those elements of the code which copyright does not cover. So other projects can use the Linux drivers as examples of what the driver must do without in any way being touched by the GPL.
Reverse engineering source code beats reverse engineering a closed binary driver. Copyright covers expression, not ideas. If you write your own expression of the ideas learned from Linux or Windows drivers, you own the code. You can apply the license of your choice. It isn’t covered by the GPL or the Microsoft EULA.
Even if porting (or reverse engineering) Linux drivers puts them under the GPL–and hac says it doesn’t for reverse engineering–didn’t you say you wanted to take a microkernel approach? I fail to see how that would be a problem. If you really feel the need for a closed source kernel, your drivers can still be open and not cause any legal issues.
I’ll remind you that BSD is so open that you can steal their code and put it into your closed product with little or no legal fuss. And NetBSD has the hardware abstraction you want.
“With the Modular Portability Layer, the driver is completely isolated from the hardware platform, I/O instructions or no I/O instructions, interlocking, retry error recovery, bounce buffers, memory type boundaries, scatter/gather maps in host bridges, even peripherals which use pseudo-dma to write a buffer RAM with host CPU copyin and copyout — all are transparently handled beneath the driver layer. Moreover, several embedded systems using NetBSD have required no additional software development other than toolchain and target rehost.” — http://www.wasabisystems.com/gpl/linux.htm
One of the promising microkernel Linux OS projects that went nowhere:
http://www.mklinux.org/
-magg
> > 1) Uh? Because Linux doesn’t have a hardware
> > abstraction layer, each driver has to take care
> > of its own stuff (interrupts, locking, etc.); the
> > drivers themselves are the abstraction layer.
> > Linux drivers have to deal with interrupts,
> > locking and so on, while Windows drivers don’t have to.
>
> Ever asked yourself why that is so? Well because
> the kernel is proprietary (read:closed source) so
> Microsoft has to prepare a layer to prevent anyone
> from playing directly with the kernel except
> themselves. HAL ISN’T designed for your convenience,
> it is designed to protect Microsoft. Go figure!
Interesting notion. Weird, freakish, and altogether wrong, but interesting.
To give you a hint: When I write a driver, I want to write a driver, not “play with the kernel”. Ever heard about the software engineering goal of “minimal interfaces”?
Do you really want to tell me that writing drivers against a defined, static HAL interface is not more convenient for the driver developer than writing against the latest incarnation of whatever non-interface the current kernel provides?
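To show what “writing against whatever non-interface the current kernel provides” means in practice, here is a rough sketch (mine, not from any poster; the device name and IRQ number are made up, and the in-kernel calls shown use present-day signatures, which have changed across kernel versions) of the housekeeping a Linux driver does for itself, registering its own interrupt handler and managing its own lock:

/* Sketch: a Linux driver claims its interrupt line and does its own
 * locking itself, rather than going through a HAL-style layer. */
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/spinlock.h>

#define MYDEV_IRQ 11                    /* hypothetical IRQ line */

static spinlock_t mydev_lock;           /* driver-managed locking */

static irqreturn_t mydev_isr(int irq, void *dev_id)
{
    unsigned long flags;

    spin_lock_irqsave(&mydev_lock, flags);
    /* ... talk to the hardware registers directly here ... */
    spin_unlock_irqrestore(&mydev_lock, flags);

    return IRQ_HANDLED;
}

static int __init mydev_init(void)
{
    spin_lock_init(&mydev_lock);
    /* The driver itself, not a HAL, asks the kernel for the interrupt line. */
    return request_irq(MYDEV_IRQ, mydev_isr, IRQF_SHARED, "mydev", &mydev_lock);
}

static void __exit mydev_exit(void)
{
    free_irq(MYDEV_IRQ, &mydev_lock);
}

module_init(mydev_init);
module_exit(mydev_exit);
MODULE_LICENSE("GPL");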
Some people really aren’t in a position to talk about OS design…
@ de Selby:
> BSD.
Enlighten me:
> Redistribution and use in source and binary forms,
> with or without modification, are permitted provided
> that the following conditions are met:
>
> Redistributions of source code must retain the above
> copyright notice, this list of conditions and the
> following disclaimer.
>
> Redistributions in binary form must reproduce the
> above copyright notice, this list of conditions and
> the following disclaimer in the documentation and/or
> other materials provided with the distribution.
That means whatever I write that’s based on BSD code falls under BSD, and must hence be made freely distributable, right? Except that it doesn’t necessarily extend to anything linked with it.
As opposed to the little shootout earlier, this one really has me confused, and I would welcome any advice that is rooted in more than just personal opinion.
Whether or not the HAL makes it easier for Microsoft to keep their kernel sources closed, an open source HAL for Linux would have its uses, too: it would keep the code above the kernel clean, and especially it would allow that code to be independent of kernel development (unless a new version of the HAL comes out, which should not be necessary as often as kernel updates).
Looks like the next time I install a *NIX-like OS, I’ll try a BSD and no longer Linux.
The Windows 2000 chart is perhaps not the best (as Windows 2000 in terms of security design is a bit weaker than NT 4.0 and 3.51). There the executive layer did not run in kernel space.
Because a hardware abstraction layer is not needed! Linux has more driver support for all sorts of hardware than any other operating system in the world, except NETBSD, and guess what even without a HAL, it works!
HAL doesn’t magically solve your driver woes. It’s great theoretically, but theory and practice are hardly always analogous. Same thing with the Micro-kernel vs Mono-kernel BS.
If you don’t want to use Linux for political, religious and non-technical reasons, fine. But don’t tell me crap like it is technically inferior to Windows because Windows has a HAL. I don’t need to dig far to show you how flawed Windows is by design (e.g. apps being able to crash the system), and in fact I have already done so in this thread.
You need to differentiate between the design of an OS and the implementation of an OS. Windows is implemented in a way that sucks; as a result you see the bugs all over the place. And in Linux there are bugs that allow you to crash the system, too, till they get fixed (it was somewhere in iX (a German computer magazine) that I have around here at the moment). Now of course these are fixed. And at the same time there was an attack that had worse effects than WinNuke had on a Windows system back when it was around; however, somehow it never got that popular (not enough Linux systems).
And with a clean design (for example, a single, TINY piece of it would be a HAL) Linux would have needed less effort to support all that hardware, as you save the time spent adapting a driver for kernel version after kernel version after yet another new kernel version. That time could be used to support other hardware, or to have a life…
> Because a hardware abstraction layer is not needed!
Bull!
> Linux has more driver support for all sorts of
> hardware than any other operating system in the
> world, except NETBSD…
Blah, blah, blah. So a great many people put up with the architectural deficiencies of Linux. Nothing new there; if people wouldn’t, nobody would *know* about Linux, now would they?
> and guess what even without a HAL, it works!
Yes, and cars drove without differentials and catalytic converters and airbags and power steering and safety belts, and yet we still invented those things.
> If you don’t want to use Linux for political,
> religious and non-technical reasons, fine. But
> don’t tell me crap like it is technically inferior
> to Windows because Windows has a HAL.
I tell you that it is technically inferior to Windows in this regard, and many others, too. Then again, it is superior to Windows in many regards. And then there are other operating systems that are superior to either one in yet other regards.
And as long as there isn’t an OS that brings it all together, we OS developers will strive for it.
> I don’t need to dig far to show you how flawed
> Windows is by design (e.g. apps being able to crash the
> system)…
As opposed to drivers being able to crash the system, which isn’t really better. I have yet to see Windows booting into a black screen because of some driver failure, which I have encountered many, many times with Linux.
You are so full of your oh-so-great Linux that you simply won’t accept the idea of Linux being flawed anywhere, and it’s this shitty attitude that’s another reason why many OS dev’ers would rather start from scratch than put up with people telling them the broken way is better just because. Google for the kernel-space-C++ thread from the LKML. The word is xenophobia. Or egomania, if you will.
What I find very funny is that you’re criticising other people’s views on OSes quite harshly, and at the same time you’re not able to read the BSD license correctly – a license which basically gives you all the freedom you could ever want…
Don’t you know that at one time even Microsoft used BSD code in their software?
About HAL vs. no HAL, you appear very biased when you apparently think that having a HAL has only benefits and no drawbacks…
This is not a religious issue, but a technical problem: having a HAL has many advantages, but it also has drawbacks, as it may be inefficient if the abstraction no longer suits the hardware “naturally”, or if the kernel evolves fast…
Linux developers have currently chosen the no-HAL road in the name of efficiency; that’s their choice. What entitles you to criticise it?
Except that you also seem to have a big ego..
I raised a very valid question regarding the BSD license, and not for the first time, and I have yet to receive a good answer to that question that’s founded on knowledge and insight instead of opinion and common practice. Until then, I redirect any remarks in the class of “can’t you read” to NIL:.
Does the BSD license make derived work fall under the BSD license? (I understand it does.) Does that mean that such derived work is inherently freely redistributable? (I understand it is.) Until someone convinces me I can base my work on insight gained from BSD code and then release it as Public Domain or under a proprietary license, BSD code is not a viable option to base my kernel upon.
In my book, Public Domain is free, everything else is “strings attached”. Basic building blocks should be released as PD, everything else should be up to the author. If it isn’t PD, don’t claim it’s “freely available”.
> Linux developers have currently chosen the no-HAL
> road in the name of efficiency; that’s their choice.
> What entitles you to criticise it?
I am allowed to criticise the very moment someone shoves that decision under my nose and tells me it is “better”, for Linux and every other conceivable OS (including mine). In such a situation, I tend to go into counterpoint.
See? As opposed to Mystilleef, you didn’t push “no HAL” as being superior per se into my face, and lo and behold, we could calmly talk about pros and cons, and part again without either of us having to convince the other of a different point of view.
I don’t criticize Linux’s architectural decisions. It’s not my OS, so it’s not my job to decide. But I believe the OS of the next century should look very different, from the ground up, and as such, I am allowed my own opinion of whether I consider those decisions to be “good” or “bad” with regard to other OSes, am I not?
> Except that you also seem to have a big ego.
I am vocal about my opinions; partially because I think I know what I’m talking about, partially because I have a knack for writing. So what? Are you only allowed some kind of ego if you are the author of Emacs, and do you have to have written another big editor to criticise the author of Emacs?
😉