The HURD was meant to be the true kernel at the heart of the GNU operating system. The promise behind the HURD was revolutionary — a set of daemons on top of a microkernel that was intended to surpass the performance of the monolithic kernels of traditional Unix systems and in doing so, give greater security, freedom and flexibility to the users — but it has yet to come down to earth.
I believe the Hurd project is managing its very limited resources very badly. From what I understand, the lack of a microkernel is not the main obstacle. But they could use existing technologies:
1. Adopt OKL4 and dump Mach.
2. Reuse Syllable's implementation of the POSIX API and merge efforts to pool resources.
3. Settle on the Linux DDE Kit for drivers.
4. Reuse components from smaller projects or open source corporate projects despite licensing differences.
5. Improve the project's web presence.
6. Aim at today, not 100000 AD.
7. If GCC does not provide capabilities, postpone them for later; Genode OS works without them.
When people have few resources they adapt to the problem at hand. This is how OSS works. In this case I see a corporate attitude, and more specifically the bad side of it. Ten years from now we will be asking the same questions.
A corporate attitude would have aimed at a reasonably short-term release, with some sort of deadline or target timeframe. That is an attitude I can see no similarity with here, not even in research.
After reading the three pages of the article, I still don't see why HURD should be something to spend resources on. That's just me, but are they clear about their objectives and motives? Twenty years in technology is not quite a whole geological era, but it's a long time.
“The GNU/Linux system is catching on somewhat more now. The system is becoming popular for practical reasons. It’s a good system. The danger is people will like it because it’s practical and it will become popular without anyone having the vaguest idea of the ideals behind it, which would be an ironic way of failing”.
That's why. Linus doesn't like the FSF's ideals and is completely opposed to pushing their agenda, so Stallman is in a sort of strange position; his software and software license are now being used by several orders of magnitude more people than they used to be, but in a way where the ideology comes along only optionally, which for him completely misses the point. The whole 'GNU/Linux' naming silliness is an effort to combat that, but I am sure they would rather not have the most visible developer of the largest GNU-related ecosystem always talk about how he only went with the GPL to enforce the whole "share and share alike" thing, rather than their whole "proprietary software is evil" thing.
Fine, I understand that. But isn’t it (the defense of the ideology) a little insignificant with respect to the massive undertaking that building an OS (more precisely a non-Linux-based OS) is today?
What would anyone but the FSF gain if that success were actually pushing the ideology instead of making it optional?
I’m not the most passionate hardcore supporter of the free software movement and I can’t stomach using Linux for more than a few hours a month. But Linux is free, it is mainstream (Ubuntu?), it encourages people to contribute and it opened the eyes/minds of some (like me) to open source, security issues, diversity, customization of software, etc. It deserves better than a lukewarm “It’s a good system”. Even I can find more positive adjectives about Linux.
It also has its weird sides, like a million distros, incompatible stacks, and competing projects with the same features and goals, which always makes me wonder why they don't join efforts and pool ideas (hence my previous "I still don't see why HURD should be something to spend resources on").
Linux changed the world, maybe not as much as the iPhone 🙂 but it did serve the free software movement and the community, users and coders, pretty well. Only, not Stallman.
I’m with free and/or open software. But I’m not with “all software is free and open and only that” because, as I’ve said in another comment, software engineers, artists, etc. also have rents and mortgages to pay. Just today, I thought about this article http://osnews.com/story/23494/Profiting_From_Open_Source_-_Without_… and wondered how the IntelliJ people manage to survive when their own Community edition, Eclipse and Netbeans (there are probably many more such IDEs) are there and of such high quality.
How can Linux being liked because it’s practical be “dangerous”?
When he says “which would be an ironic way of failing”, he’s talking about failing to rally the larger crowd behind his ideology in its strictest sense, right?
It would be ironic, you have to admit. Look at the history. By itself, the Linux kernel might well have ended up as a student's obscure pet project. It was the GNU userland and the GPL license that combined with it to make it what it is today. And those were both Stallman's brainchildren.
I dig what the man says. The public at large will always be just consumers. They take stuff and use it and spare no thought for what went into making it. We all do it, to some degree.
One day Linux will indeed become mainstream (some might say that day has arrived) and people will use it just like they use Windows or Macs, without even knowing what FOSS is and how it changed the world. For someone who was there during the 90’s, it’s extremely ironic.
There are examples all around us. Think about all the kinds of human rights that people fought and died for, and how we just take them for granted.
If it wasn't for GNU, Linux users would probably have a BSD userland.
LOL… actually, if it wasn't for Linux, GNU could have relicensed BSD as the userland. Funny that Minix is growing far faster than HURD is.
HURD really has no real goals though…
Minix == security and stability through a microkernel
Linux == fast server OS
Solaris == fast server OS and Workstation OS
Windows == works where it counts and catches the customer's eye
Haiku, I think, has great potential due to its desktop/workstation focus… Syllable too, with its split design: a custom desktop kernel and Linux for their server platform.
Actually, no:
“I think it’s highly unlikely that we ever would have gone as strongly as we did without the GNU influence,” says Bostic, looking back. “It was clearly something where they were pushing hard and we liked the idea.”
Source: http://oreilly.com/openbook/freedom/ch09.html
Linux has become mainstream; however, the real irony here is that people rarely use the GNU userland tools despite often using Linux on a daily basis.
Linux is on our phones, in our routers and on our sat-navs – but in each case Linux is buried so deep behind layers of corporate-developed (and usually proprietary) userland tools that it's easy to forget just how widespread the OS is.
I probably use grep, find, ls, cp, rm (and curl, but I'm not sure if that's GNU or not) on a daily basis… and that's it. I would be curious whether IT guys use more of the tools, because as a dev I would rather just write a quick script or one-liner in a more general-purpose scripting language (like Ruby) than use stuff like awk, groff, etc.
mkdir, top, tail, head, less, chmod, chown, awk, cut, w, xargs, tar, cat, ps, netstat, dc
amongst others.
One-liners are cool too (but really only in Perl; Ruby's not poetic enough for my tastes), but sometimes GNU tools with some bash glue are faster.
So I forgot mkdir and less, those are probably daily too. xargs is probably a few times a week; chmod/chown and tar are probably every few months for me. I had to man dc, w, and cut, hadn't ever run into them (cut in particular seems especially useful, going to have to keep it in mind).
I'm not as bad as others I work with who will string a line of cut, awk, dc, head and wc together to filter logs for info.
Generally, if I do whip out more than 6 piped commands together, I’ll save it as a shell script.
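As a concrete illustration of the trade-off being discussed here (a quick script in a general-purpose language versus a cut/sort/uniq pipeline), below is a minimal Python sketch. The task (counting hits per client IP) and the assumption that the IP is the first whitespace-separated field of each log line are made up for the example, and the script name is arbitrary.

#!/usr/bin/env python3
# Quick log filter: count requests per client IP, reading the log on stdin.
# Rough script equivalent of: cut -d' ' -f1 access.log | sort | uniq -c | sort -rn
import sys
from collections import Counter

counts = Counter()
for line in sys.stdin:
    fields = line.split()
    if fields:                   # skip blank lines
        counts[fields[0]] += 1   # first field assumed to be the client IP

for ip, n in counts.most_common():
    print(f"{n:6d} {ip}")

Run it as, say, "python3 count_hits.py < access.log". Once a one-liner grows past a handful of pipes, saving it as a script (shell or otherwise) is exactly the point being made above.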
So you (and Bill Shooter of Bul) never use GNOME either?
Oh, it’s not part of the base system, but it’s still GNU.
As are Bash and GRUB (and it's pretty hard to escape GRUB on any PC – be it laptop, desktop or server – without switching to the now not-so-common LILO).
Didn't realize that; you learn something new every day. I use zsh instead of bash, but GRUB is definitely there.
The GNU userland is not just rm, grep and awk. It's also ld, glibc and gcc. The Linux kernel does not compile without gcc. Linux depends on GNU, and the whole GNU system depends on glibc (sockets, i18n, pipes, I/O, users and groups, processes, etc.). The kernel is a very small component.
That's a good point; from the linking perspective it's all GNU.
The world would have just used FreeBSD.
This is really what bothers GPL fanatics about FreeBSD. Its existence disrupts their theme of Linux being The Chosen One (cue Star Wars theme).
Computing would actually be farther ahead if FreeBSD had become the de facto free Unix, due to its sane development structure and lack of a holy war against binary drivers.
I remember reading in 1999 about how Linux was going to take over and destroy Apple.
Windows Server was supposed to die as well.
Linux on the desktop has not only failed but it has already missed key opportunities to take significant share (instability of Win 9x and early XP, 9x to XP transition, security issues with XP, 64 bit transition, Vista released as a beta).
I believe the main cause of failure for desktop Linux has been the GPL and the ideology that goes with it. Linux has been a middle finger to proprietary hardware and software companies and that is not how you build alliances against powerful corporations.
Now go tell your bearded hippie leader how awesome he is for declaring war against proprietary companies. Make sure you don’t include any images since he only uses a text based internet.
All hail Stallman and the people’s 1% operating system.
I was trying to find a good piece to quote that sums it up, but I think you really need to read http://www.gnu.org/philosophy/free-software-for-freedom.html and http://www.gnu.org/gnu/why-gnu-linux.html to understand where they are coming from.
The reason for the existence of the FSF is to promote the GPL. The reason for the GPL is to eliminate proprietary software, because proprietary software is morally wrong. That means the goals are completely ideological, with some pleasant practical side effects.
Open Source is there for completely practical reasons: to promote sharing of work on common infrastructure instead of everyone reimplementing the wheel privately over and over again. It doesn't really make any moral judgments; in fact, if you read ESR's "The Magic Cauldron" essay, he actually says there are certain things that really should be and stay proprietary for a business to make sense.
Now, there is some overlap between the two groups, because at the end of the day, while the motivations are different, the ideas involved in actually writing software are the same. The problem with Linux is that, even though it is by far the most visible FOSS project out there, Linus is so far on the open source side of things that its success isn't really achieving the goal of Free Software, which is the ideology.
99% of the time this stuff doesn't really come up anywhere (unless you listen to talks by RMS), but the Linux project is one of those places where it does. Linus has said before that he doesn't want to limit the ways people can use the kernel; all he wants is that if they piggyback off his project, they are required to share any modifications they make. That's why he doesn't have a problem with kernel-level DRM (for example). That's fine from an OSS point of view, but from an FSF point of view it is downright heresy. If they had a 100% FSF-driven OS out there, they could actually stop things like that, or things like binary drivers in the kernel, or things like TiVo requiring signed kernels to run on their devices.
Agreed – no profit-making company would allow a project to drag on like this, 20 years with nothing to show for it. Under a real “corporate attitude”, it would have had a year or two to demonstrate feasibility, and having failed to do so, been killed off promptly.
I suppose if they had started from scratch back then, they might already have come up with a working, stable kernel release.
Nice statement in the article: “Unending search for the perfect kernel”.
This is one of the best ways I’ve heard it put about why GNU/Linux was such a good match, and why FOSS continues to grow exponentially to this day:
(As a side note, here’s an idea for an image evoked by that quote, an image I wish I could draw: a huge and horrible hacked together machine, yet functional, with FOSS written on it and people all over it, hammering and working on it, and a guy filling the tank from a canister labeled “Fun”. Linus really nailed it. FOSS has and will always have fun as a core design principle and will be powered by it.)
Back to the topic: in 1997, Eric S. Raymond said: “Release early, release often. And listen to your customers.” Linus did that (instinctively, years before ESR said it). The result is above. The FSF did not.
They had a chance, with BSD 4.4-Lite, but didn’t take it (granted, Mach seemed like the best choice at the time; it is only in hindsight that the choice appears wrong). Then again, they tried so many microkernels over the years and none of them worked as well as they’d hoped. That says something about the microkernel vs. monolithic debate; microkernels may be, in theory, more advanced in some respects, but in real life their development poses problems which cannot be handled by the HURD team’s limited resources.
And so it has come to pass that HURD is nowadays a classic example of a software project that always chases moving targets. “Perfect design”? There’s no such thing.
Research is fine and all, but at some point you have to come out of the lab with something that (1) works and (2) is needed. HURD doesn’t fulfill either of these requirements.
We shall see what is needed in the end.
Minix 3 exists as living proof of the contrary.
It’s developed by a small team, and it’s a microkernel (heck, it’s one of those which take the microkernel approach to its most extreme), yet it actually works on a lot of computers and can already run a decent set of UNIX software.
Microkernel programming is really just a way of thinking, although you can make that way of thinking as twisted as you like. In my opinion, the “all processes are created equal” rule must be respected, which means that the scheduler, clock driver, vmem manager, and all other services that could instantly bring the system to a grinding halt should be left in the kernel… but that’s another story.
Getting back to Hurd, I think the only lesson we can take from this is that the Hurd team is incompetent. They are over-ambitious, as if they somehow suffered from second-system effect without ever having created a single working product. And, like the Windows NT team, they are among the few people who have applied the notion of feature bloat to kernel design.
Many kernel projects have died before because of uncontrolled development; Copland is another famous example. The major problem with OS design is that it requires extremely good management. Some are up to that, some are not, but it has always been a sore spot for open source software, which is generally started and built solely by developers who know nothing about management.
I really wish people would stop giving ESR credit for that. He was just the first guy who wrote it down in a public place, and he only really covered one side of it. In 1997, Kent Beck was in the middle of writing Extreme Programming (which takes a more holistic approach to quick release cycles than ESR did), which was itself a codification of what the Smalltalk guys were doing years before that.
While I'm sure Duke Nukem wouldn't boot on any computer, it got lots more press than GNU Hurd did. My hat's off to the skillful coders who have brought it as far as it's come.
*Salute*
NeXT and Apple seemed to make Mach work. Why is that? Simple. Developing a kernel is frustrating, difficult work (although I suspect for the right person the rewards can be high). If it is your job (as opposed to your hobby), then the expectation that you will grind through and make it work is much, much higher. Maintaining momentum with a hurd of volunteers working part time is much harder than it is when a project has money behind it.
A key point is also that Linux has had significant corporate backing, with paid engineers working full time on various parts since their employers have an itch to scratch. AFAIK, HURD has never enjoyed such an arrangement.
I think the key point is more likely that Linux kernel development followed the "release early, release often" mantra _most_ closely of all projects. This is because a semi-working kernel lets people work the problems out as they run into the stubs. If your kernel simply does not have enough to run a web browser, then people cannot even casually use it long enough to bother working on it.
NeXT made mach work, because they had a product to sell based off of it, and because they didn’t have a set of pre-conceived notions of design purity.
But Linux did not have any corporate backing until it was already working pretty well on single-CPU x86 systems. Hurd has never reached that point.
As others have said, the wild success of Linux was what attracted corporate support, not what created the wild success. And Linux was never perfect, but they did release, and added more and more features. It went from fragile hobby to being a factor in Sun’s demise pretty quickly.
It's not so simple: Apple's kernel is XNU, which is a hybrid kernel and not a pure microkernel. It has Mach at its core and a BSD system as the only daemon for everything else, so the limitations of the Mach microkernel (slow IPC) don't hurt performance too much.
However, in a system like Hurd, where each OS task is handled by a separate daemon, these limitations are a real issue.
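To give a rough feel for the IPC cost being discussed, here is a small Python sketch. It is only an analogy, not a model of Mach: a plain function call stands in for work done inside a monolithic kernel, and a request/reply round trip over a pipe to another process stands in for a message to a user-space server.

import time
from multiprocessing import Pipe, Process

# Crude analogy: "in-process call" vs. "IPC to a separate server process".
# The numbers are illustrative only; they say nothing about Mach itself.

def server(conn):
    # Echo server standing in for a user-space OS daemon.
    while True:
        msg = conn.recv()
        if msg is None:
            break
        conn.send(msg)

def direct(x):
    # Stand-in for a plain in-process function call.
    return x

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=server, args=(child,))
    p.start()

    n = 10_000
    t0 = time.perf_counter()
    for i in range(n):
        direct(i)
    t1 = time.perf_counter()
    for i in range(n):
        parent.send(i)
        parent.recv()
    t2 = time.perf_counter()

    print("direct call    : %.2f us/op" % ((t1 - t0) / n * 1e6))
    print("pipe round trip: %.2f us/op" % ((t2 - t1) / n * 1e6))

    parent.send(None)  # tell the server to exit
    p.join()

On a typical machine the round trip comes out a few orders of magnitude slower than the direct call; that gap is roughly what a multi-server design has to amortize or avoid on every request that crosses server boundaries.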
Microkernels do not need to be slow. Look at the computers they sell today, which take a minute to boot while running on hardware millions of times faster than machines that could boot in 15 seconds some years ago.
Compared to that, what are a few microseconds spent on IPC worth?
Every ms counts if you run some process 1000 times or more… as you can see in some of the tasks you might do yourself during the day.
Sure, but I think we currently waste so many of those precious ms that the overhead of a microkernel would be nothing compared to that.
From RoughlyDrafted
“Once again, just for good measure: Mac OS X is not based on a microkernel architecture, and has never used Mach as a microkernel. Apple’s XNU kernel is larger than many monolithic kernels, and does not suffer from the intractable performance failure the world associates with Mach microkernel research.
Apple has incorporated progress the Mach project made in development of Mach 3.0, but nothing changed: Mac OS X still does not have a microkernel architecture. Its XNU kernel is not implemented as a microkernel. Apple does not use Mach as a microkernel.
XNU incorporates many technologies from Mach which makes it different than traditional fat kernels such as BSD or Linux. The microkernel myth confuses the facts by associating [anything related to Mach] with [the failure of the Mach microkernel project], which sought to remove BSD from Mach. Since Mac OS X’s version of Mach is full of BSD, this false association is rooted in either ignorance or FUD (or both), depending on who is reweaving the myth.
So there you have it: the Mac OS X Microkernel Myth falls apart on the simple discovery that Mac OS X has no microkernel.”
Why? Just give up already. It's like some quest for the Holy Grail, except in this case it will never be found. If they turned it into a research kernel, then things might perk up. But as an OS kernel meant to compete with legit operating systems, it's never going to happen.
Are you saying we might find the Holy Grail?
In reference to GNU Hurd, I'm saying the Hurd people have lost sight of what's important and will never be able to find the Holy Grail, i.e. a usable release of the Hurd kernel. Just a metaphor. A perfect kernel will probably never exist.
So there was a quest for the Holy Grail where the Grail *was* found?
The Holy Grail was found by the Knights Percival, Bors and Sir Galahad in the stories of King Arthur. See here:
http://en.wikipedia.org/wiki/Holy_Grail
Incorrect… Arthur and Bedevere were arrested before actually finding it. Galahad died earlier, at the Bridge of Death. Geeze, don't they teach history where you're from!
http://en.wikipedia.org/wiki/Monty_Python_and_the_Holy_Grail
The reason Linux lives is that you can found big companies around Linux and become a multi-billionaire. That is not possible with FreeBSD or OpenSolaris: someone owns them.
But with Linux, no one owns the distros. If Linus T. released an official distro, then RedHat and SuSE would die; everyone would use only Linus's distro. Linus could make changes that only benefited his distro and fought RedHat. You could not run a business on a non-Linus distro. Large IT companies would quickly lose interest in RedHat, SuSE, Ubuntu, etc.
Linux allows you to become rich. It is like a gold rush: no one owns the gold, so just grab and sell whatever you can. No one will stop you. It is an excellent business idea: someone else develops and does all the hard work, and you just bundle it and sell it. Easy peasy. Get rich. Anybody can do it. Linux is not owned by anyone (FreeBSD and OpenSolaris are owned).
If you try to found a large company around FreeBSD, then you will have problems. Someone else owns it. You would be at the mercy of Sun or Oracle if you tried to capitalize on OpenSolaris. Maybe even enemies.
Look at Blizzard and StarCraft 1. In Korea it is a huge e-sport; the top stars earn millions of USD. Kespa, the Korean e-sports organisation, owns and controls everything related to SC1: broadcast TV matches, etc. Now SC2 is arriving. Blizzard wants a piece of the cake, and Kespa no longer has a monopoly; Blizzard has cut its contract with Kespa. SC1 was "free" – Blizzard did not care if someone else earned money on SC1 – so SC1 could become big. Now SC2 is coming, and Kespa will have no interest in making SC2 big because they cannot earn big money anymore. Kespa has forbidden all SC1 players from playing SC2. The creator of SC2, Blizzard, takes all the profit.
If Linus T. took all the profit, then everyone would lose interest in Linux. There are lots of free hobby OSes out there. The thing is, Linux can make you rich because no one owns it. Tie your new OS or tech too tightly to yourself, and no one will use it.
The problem is you won't get rich with Linux, because nobody wants it on their desktops.
Almost nobody pays for a desktop OS anyway (it's mostly pirated or bundled with new hardware), so it is not a really good market to target. It is an "OK" support market if you live in a city of 2 million+, but otherwise you will go bankrupt. The real market for Linux is the enterprise, and most businesses know that fact really well.
Tell that to the graveyard of Linux businesses like Linspire, Corel Linux and VA Software. Canonical would be in that graveyard as well if it wasn’t funded with slush funds from the tech boom.
The problem with investing in Linux is that your competitors can take your R&D without paying a cent. Novell and Red Hat make money from Linux by selling support, not the actual software. They are not getting rich and Novell in fact has been privately put up for sale.
Red Hat could just as easily sell RedBSD.
What are you talking about? Starcraft became popular because it was a great game. Blizzard still sells it directly from their website.
http://us.blizzard.com/store/browse.xml?f=c:1,f:3
Proprietary software can also make you rich, and you don't have to give out your R&D. Getting rich from software is difficult, and even more so if your competitor can just take your innovative additions and stamp their corporate logo on them.
“They are not getting rich”? The RedHat founders became multi-billionaires when RedHat went through its IPO. I saw a list of all the new multi-billionaires minted just because of Linux a couple of years ago, in a magazine.
Regarding Novell: well, Novell had old NetWare and no new OS. They just embraced Linux and put out a Linux distro, which cost very little time and R&D compared to developing a whole new OS.
A “RedBSD” would be based on FreeBSD, and RedHat cannot sell FreeBSD, as I wrote earlier.
SC became big because of Korea, where it is bigger than soccer, hockey or baseball. And it became big in Korea because of the Kespa organization: Kespa started tournaments and TV shows and got all the income from them. Now Blizzard is trying to get a piece of the cake with SC2, and Kespa will not favour SC2 anymore.
What I am trying to say is that SC1 was free to earn money from, so Kespa did that. SC2 is not free to earn money from anymore; the creator, Blizzard, is there. So Kespa loses interest in SC2.
Just like companies would lose interest in Linux if Linus T. were there taking all the income. Where the big money is, everyone follows, especially if it is free and no one claims ownership.
That is my point. Someone else spends the R&D on Linux, and you just grab it, repackage it and sell it. Splendid business idea. Let someone else develop it; because it is free, you can grab it and sell it. It is like the Klondike: just grab it and sell it. No one will stop you.
The reason Linux became big is that you can get rich by founding a company around the Linux kernel, creating a distro and selling it. No one would buy another FreeBSD or OpenSolaris distro – there are already official distros. Why buy a copy instead of the original?
The founders got rich from the IPO, but then so did a lot of founders in the tech boom. As a company they are not getting rich; Adobe and Rockstar Games have better financials.
This issue was actually covered recently in Computerworld UK:
http://www.computerworlduk.com/community/blogs/index.cfm?blogid=14&…
Why couldn’t they sell their own modified version of FreeBSD? Maybe you should read about the basics of open source licenses before commenting here.
So what about the company that invests the R&D? Are they just suckers?
Red Hat invested quite a bit in Linux and then Oracle came along and offered support for RHEL at half price. This actually caused their stock to drop a while back.
What a truly splendid business. A company like Red Hat invests in Linux and a massive corp like Oracle can come along and undermine their profits without improving the software.
But this only proves my point. Oracle CAN do this with Linux. It would not be possible to do this with Apple's OS X or Windows. This possibility is the sole reason Linux attracts lots of business people: you can easily make money out of Linux.
I am not saying this is good, but I am saying this opportunity draws the attention of lots of businessmen who want easy money. (Personally, I don't like this at all.)
But seriously, why would a businessman NOT sell something that a lot of engineers have put a lot of hard work into? It is free! Let someone else (the hard-working nerds and geeks who gladly code for peanuts) bear the R&D cost. You just brand it as a package and sell it.
Nope, that has nothing to do with your point, which was that this applies only to Linux and not to FreeBSD. There was no mention of Mac OS X or Windows in your original posts.
However, there’s nothing stopping people from doing for Windows/MacOS X what Oracle did for RHEL. In fact, there are tonnes of businesses out there that provide support for MS/Apple products at lower prices than Microsoft.
How is that any different from any other industry? Just look at the soft drink industry. Sure, when you walk down the aisle you see multiple brands (Coke, Pepsi, PC Cola, Safeway brand, Craigmont Cola, etc). However, if you look closely, you'll find that most of the "store brands" are just Coke or Pepsi re-branded and sold at a lower price, with just a few tweaks to the recipe.
No, it doesn't prove your point. It isn't splendid business for Red Hat when Oracle can cut into their profits without investing in Linux. Oracle makes their money from OracleDB and just picks up Linux customers from Red Hat as a way of making some extra cash and getting new customers closer to their proprietary database software.
You know what splendid business looks like? Being able to make billions a year in profit from a single software product. That’s what Oracle does and the profit they take from Red Hat is pocket change to them. Now don’t get me wrong, I think Oracle is a good company and undercutting Linux companies is fair game but I see nothing splendid from the point of view of Red Hat. Profitable yes, but not splendid or lucrative.
I don't think it is easy money. Oracle can only do that because they have a strong corporate reputation and a tie-in with their database software. Novell also offers RHEL support, and they certainly aren't getting rich from it.
Oracle isn't stealing any business from Red Hat, get real. It is not the same deal, and Unbreakable Linux customers are not potential Red Hat customers. Oracle Linux customers are those who don't need Red Hat-grade Linux support; they sit between CentOS and Red Hat. Oracle will install it for you, but they won't fix bugs or security issues, or implement new features. You could say that Red Hat is stealing from GNU, but they don't. Actually it is Red Hat that is stealing business from Oracle Solaris, and that is what killed Sun. The Red Hat business model is one of the most effective ones. Oracle may profit from the new features implemented by Red Hat, but they can't do that on demand like Red Hat can.
No, they are offering the same level of support at half the price in an effort to take Red Hat customers.
“To get Oracle support for Red Hat Linux all you have to do is point your Red Hat server to the Oracle network. The switch takes less than a minute.”
http://www.linux-watch.com/news/NS7266264422.html
Looks like you have never dealt with either Red Hat support or Oracle support. Red Hat support provides a patch that fixes the problem overnight. Oracle points out that they can't fix that problem because it's an upstream problem, but that you can always ask Red Hat to fix it. And indeed they can't: if they did fix problems they would be forking Red Hat, and that would cost them money, so they don't want to. They stick to Red Hat. Sorry, but that is not the same level of support. Red Hat does not compete with Oracle Linux; they compete with Oracle Solaris. At least Oracle can fix Solaris when customers face problems.
Where do you get the idea that “no one can sell FreeBSD”??
Anyone can sell FreeBSD. No questions asked. In fact, many people do sell it:
* FreeBSD Mall sells FreeBSD CDs and DVDs
* Juniper sells routers with FreeBSD inside
* PC-BSD is built on top of FreeBSD, but it’s still vanilla FreeBSD underneath
* Walnut Creek used to sell FreeBSD CDs
* iX Systems sells laptops, desktops, and servers with FreeBSD pre-installed, along with support for FreeBSD
There are many, many more; those are just the ones I can think of without resorting to Google.
Anyone can download the sources for FreeBSD, stick it on a CD, and sell that CD to anyone else. It’s perfectly legal, so long as you leave all the copyright and license notices intact.
You can also download the sources, compile them, stick the binaries on a CD, and sell that CD.
You can also stick the sources/binaries into specific pieces of hardware, and sell those.
It’s all allowed by the BSD license.
This is the same for *ALL* open-source software. So long as the license allows it, you can do whatever you want with the source (and binaries usually), including selling it, or founding a business around it. This is not something restricted to or unique about the Linux kernel sources.
Tell that to all the people using FreeNAS, PC-BSD, pfSense (all “distros” of FreeBSD), or all the embedded developers using picoBSD, nanoBSD, microBSD (variations on FreeBSD) or all those people/businesses using Nexenta / NexentaStore (a “distro” of OpenSolaris).
These are all projects that are fairly popular, going strong, and even making money for people. And none of them are based on Linux.
Edited 2010-07-02 23:44 UTC
So you are saying that you can fork off the code, close the source and not give anything back to the community? And further develop this code and sell your product? Is that true?
100% true. It's one of the benefits (or downsides, depending on your point of view) of the BSD license. So long as you keep the copyright and license notices intact, you can do whatever you want with the source.
What do you mean by this? If I close the source and develop it further in another direction – would this not break the license?
So long as you leave the copyright notice and license intact, then, no it doesn’t break the license.
You can close the source, modify it as you see fit, release binaries, even sell it.
I stopped reading here. You have no clue what you’re talking about. Who owns FreeBSD?
Please never post again.
I wrote it wrong. I didn't mean that someone owns FreeBSD; I meant that it is not possible to just claim ownership and do whatever you want with it, for instance close the code. There is one official distro; you cannot just found a company and resell it as-is. Why would anyone buy your copy when there is an official distro, the original?
You must take the raw material and add some value, then you can sell it, just like the Linux kernel and the distros. You cannot just sell the raw material as it is.
Wow. Quick to judge and hasty to draw conclusions, right? Normally, if someone writes something weird, you ask for confirmation ("are you really serious about this???") and then you continue the discussion. But not you. If someone spells something wrong or posts late at night (as I did; check the timestamp on my post), then it's "never post here again". I am glad that you are not a moderator.
I think that is precisely what Apple did with Mac OS X.
I agree, asking someone to never post again is totally uncivilized. The guy has no manners. He is probably very young, but that does not excuse such language.
Did you see where I said "please"? That's an indication of *good* manners. It certainly is far superior to the kind of vitriolic name-calling I *wanted* to engage in when I first read the OP's post.
This kind of anti-youth bigotry would be amusing if it weren't so frightening (and so widespread). "I don't like it, so it must be caused by the poster being too young to know better. One day he will grow up and be just like me!" Grow up yourself. I didn't suggest that the OP was too young to know what he was talking about, because one's age is usually irrelevant.
Comparing ages is just a pissing contest. Would you like to compare other parts of our anatomies too? What would it prove if yours really is longer? It wouldn’t make the OP any less wrong.
I know! Let’s compare user IDs. Mine is lower and that means I’m older and you’re younger, and thus you’re automatically wrong. If you think that’s an unfair way to dismiss what you say, then we agree on that at least.
If you had made only a typo or one factual error I’d simply post a correction or clarification. You had numerous wholly fallacious assertions which are either mistakes indicating a high level of cluelessness, in which case you should really not be talking on the subject, or deliberate flamebait. Either way I don’t care to attempt a point by point rebuttal.
You wouldn’t be any worse off if I were. I don’t let idiocy color my fair application of the rules. You have not broken any rules. I didn’t even vote your comment down, I replied instead. I think that’s about as fair as you could hope to expect.
Take a few minutes to read about the BSD license. Then, next time, you'll think twice before you start spreading BS.
Tell that to Juniper, who took FreeBSD 4.x, tweaked it, added to it, renamed it JunOS, and stuck it into their routers as an alternative to Cisco routers. No customers see that source code. (Note: Juniper has fed back a lot of patches to the FreeBSD source tree.)
Tell that to IronPort, who took FreeBSD, tweaked it, added to it, and stuck it into their e-mail scanning gateways. Customers never see that source code. (Don’t know if they’ve reciprocated with patches.)
Lots of people buy Juniper routers or IronPort boxes instead of buying hardware themselves and fiddling with FreeBSD on them. Why buy “the copy”? Because there’s a lot of value-add.
However, there are also lots of people who just buy off-the-shelf hardware, stick FreeBSD on it, and tweak it themselves to save money (Juniper/IronPort boxes aren't cheap).
IOW, there’s a market for both. Claiming it’s impossible is just naive.
See above.
So what you’re saying is that they’re actually exactly the same since you need to add value to both BSD and Linux.
Nobody owns FreeBSD. The rest of your post is nonsense based on that fallacy.
… well, technically there is a FreeBSD Foundation which technically "owns" its trademark, etc.
The Hurd is not competing with Linux. There is Debian/Hurd and it works even if it is not complete. I believe it is a fun kernel to work with. There are many stupid comments here from people who don’t know what they are talking about.
Linux is the finished kernel. Even the Hurd developers agree. There is no point in doing Linux again.
The Open Source vs Free Software debate has been done to death. Repeating those arguments over and over again is pointless; everybody knows them already.
I wonder how many of the posters here writing out their wisdom have actually installed and used Hurd?
I have it on "real hardware" and it works better than one might think.
My ideas to improve it?
1. Stop chasing microkernels; stick with Mach.
2. Enhance Mach again so that it can be used for what it is strong at: for example, multi-core and multi-CPU. Mach works; GNU Mach needs some "love".
3. Improve the drivers, lifting them from Linux if possible. At the moment one has to chase down the components that work with it. Support for USB…
4. Improve performance in some areas (mainly disk and VM allocation).
5. Exploit the microkernel to get it running on other architectures. Older Mach ran on 68k (NeXT) and PPC (MkLinux).
I have almost all the GNUstep applications running on it; it is pretty nifty.
I generally agree. Chasing ever-better kernels is a clear mistake. Whatever works best *right now* should be used, and to hell with the negatives. Once you have a fairly complete system you can revisit the question of whether replacing the kernel is worthwhile. If it still sounds good then, do it.
Revival of the Mach kernel? I like it.
If they want to revive Mach, why not port the open source bits of Apple's OS to HURD?
and has someone competent at the top who actually cares about finishing the project.
HURD should be taken out back and shot. Deadlock can happen even with projects that have highly competent developers. It happens, admit it and move on.
nuff said
I'm sure that both parties would eventually iterate to the same solution, so why not work together? What is good is good. And by the way, did anyone think of this: to reduce jitter, which is so important for media/games, why not introduce the concept of "minimal CPU tasks"? That means tasks that do not require an immediate response are run as limited-resource tasks, which in turn means that multitasking, while these tasks are running, is completely transparent. Add a programming rule of: if you can make it a minimal CPU task, do so. I'm sure that would get rid of a whole lot of the stalls and buffer gaps that are (still) bothering various operating systems these days, to a (lesser) degree.
Peace Be With You.
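As a rough illustration of the "minimal CPU task" idea above, here is a small Python sketch. It assumes a POSIX system and uses the scheduler's niceness (os.nice) as a stand-in for the limited-resource tasks described; it is an approximation of the idea, not a proposal for how an OS should actually implement it.

import os
from multiprocessing import Process

# Run work that "does not require immediate response" at the lowest
# scheduling priority, so it interferes less with latency-sensitive tasks.
# POSIX-only: os.nice() is not available on Windows.

def background_job():
    os.nice(19)  # ask the scheduler to deprioritize this process as much as possible
    total = sum(i * i for i in range(10_000_000))  # stand-in for bulk, non-urgent work
    print("background result:", total)

if __name__ == "__main__":
    p = Process(target=background_job)
    p.start()
    # Latency-sensitive work (audio, video, games) would continue here at normal priority.
    p.join()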