Rux's goal is to become a safe general-purpose microkernel. It tries to take advantage of Rust's memory model – ownership and lifetimes. While the kernel will be small, unsafe code should be kept to a minimum. This makes updating the kernel's functionality hassle-free.
Rux uses a design that is similar to seL4. While there won’t be formal verification in the short term, it tries to address some design issues of seL4, for example, capability allocation.
The code is very approachable for anyone interested in capability-based microkernel design.
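For anyone unfamiliar with what "ownership and lifetimes" buys a kernel, here's a minimal, generic sketch (my own illustration, not code from Rux): the compiler statically proves that a returned reference can never outlive the data it points into, so whole classes of use-after-free bugs are rejected before the code ever runs.

```rust
// `longest` borrows both inputs; the lifetime 'a ties the returned
// reference to its arguments, so the compiler can prove it never dangles.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let owned = String::from("capability table");
    // Immutable borrow: `main` keeps ownership, `longest` only looks.
    let winner = longest(&owned, "tcb");
    println!("{}", winner); // prints "capability table"

    // The borrow checker rejects dangling references at compile time:
    // let dangling;
    // {
    //     let short_lived = String::from("gone");
    //     dangling = longest(&short_lived, "x"); // error: does not live long enough
    // }
}
```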
I don’t know what’s up with all the coverage of Rust operating systems, but I’m glad to see it. At this pace, though, I don’t know if there’ll be enough of them to last through 2017, haha. I’d like to keep the discussion going…
The language of choice for many hobby OS projects has traditionally been C/C++, which is understandable given their de facto status, but over the decades these tend to fall into the “me too” category. So I’m encouraged that more hobby OS devs are opting to break convention and go with Rust, or really anything with safe-by-default semantics. Java and other managed languages offered that, but IMHO they were always held back by undesirable runtime tradeoffs.
This new breed of operating systems could finally crack the security plateau we’ve been stuck at with complex C-based operating systems. What’s not clear is whether they can ever catch up in an economic race that started 25-35 years ago. These entries are so far behind now that it’s hard to envision them ever becoming mainstream.
I’ve lost faith that change can be driven by the users who tend to play with these operating systems. Still, in theory one of these could eventually catch the eye of a multinational company, say Sony, and in an attempt to beef up security for the PlayStation, which keeps getting hacked, they might eventually invest in more secure operating systems. It could be enough to slowly kick-start a commercial rustlang economy.
…Discuss!
Rust could be what Ada once was, only with the distinction of preexisting hobbyist buy-in. It’s also at the heart of Servo, Mozilla’s new browser engine to replace Gecko. Chuck in a C library and some POSIX compatibility for Unix software and you could have Redox become this decade’s BeOS or OS X, an attempt to marry the best of unix to a more modern system.
I think Ada still is. It’s just not “sexy”.
Ada is old and crusty so it only gets used for serious stuff and stuff that needs to be supported whatever the cost.
Rust is new, and it’s part of a culture of creating something from scratch every time design mistakes catch up with them. They don’t want to do the real engineering work when that happens, opting instead to repeat other people’s mistakes by starting over again. But then, you did say “hobbyist” already.
Hobbyist buy-in is like an oxymoron.
Without the hobbyists you have no developer pool to draw from… thus Ada gets developed only at “whatever the cost” expense.
It’s the psychological profile of the hobbyists that matters. A language like Ada, which has relatively few hobbyists because it isn’t sexy, is not that much different from a language full of fad-hopping hobbyists who will abandon ship when they realize the language they thought solved their pet peeve has developed something else to peeve them off in its stead.
kwan_e,
Yes and no. It’s really not like new languages are developed in a vacuum, they clearly benefit from experience with languages that preceded them. In other words, today’s developers have the benefit of hindsight and can consciously fix many of the issues where C/C++ are criticized, such as cruft, safety and bad compilation times with large projects. So we shouldn’t be making the same mistakes that have been made in the past.
But the possibility still exists that we are making new mistakes with new languages, and I think those would be well worth talking about. Do you have anything specific in mind?
Edited 2017-01-10 16:11 UTC
It’s the way they go about fixing the issues – “oh, we’ll just create yet another language” – that is the problem. All languages will develop cruft, and all languages will paint themselves into a corner. The most ideological ones tend to do that the most frequently.
Only a handful of languages are left where the base is committed to using the language, cruft and all, rather than abandoning it for the latest fad in language design.
kwan_e,
I realize you are a C/C++ fan, and there was a time I was too. However, many problems with C/C++ are very widely recognized and understood; consequently, some people look to new languages to solve them. The big players are motivated to work on new languages like D, Swift, Rust, Go, etc., not because of fads, but because of systemic recurring problems they have experienced with their existing codebases.
This video on the topic of new and old systems programming languages is very informative, partly because Bjarne Stroustrup (the creator of C++) is there along with others, and he personally acknowledges the criticism against C++ being made by the employees of Google and Facebook regarding their build processes. I personally admire the way they are so friendly with each other, with no animosity at all. Stroustrup even humorously takes it in stride – “the reason I take this mic back is to say that I don’t disagree with that.” – to everyone’s laughter.
Lang.NEXT 2014: C++, Rust, D, Go (1-hour panel discussion)
https://www.youtube.com/watch?v=BBbv1ej0fFo
You like C/C++? Then great. Maybe D-lang would be up your alley too? Maybe we can discuss that? I enjoy talking about all languages; however, in past discussions you seemed to take serious offense at any possibility that another language could have a single benefit over C++.
http://www.osnews.com/comments/29463
My goal is to foster a friendly discussion here, but to be honest, I don’t know how to deal with you, haha. So I ask you thusly: what can I do so that we can have a friendly discussion even when rust comes up and we disagree?
Edited 2017-01-10 18:24 UTC
My point in this discussion is not about Rust vs C++. It’s about what it takes for a language to become widespread enough to become as embedded as C and C++.
Languages will get big, bloated, and accrue cruft. No language is exempt. Until a language designer and community grow up and accept this, and are willing to support the language with all its warts, like Stroustrup does with C++ and the Ada guys with Ada (you could even say the same about HTML and JavaScript, in a perverse way), their language is a fad language.
Maybe, and let’s hope, Rust’s designers and its community are mature enough to stick with Rust regardless of how much garbage it accrues over the years. If not, it will not reach the critical user mass needed to prevent the next GoSwiftRust from taking over, and the whole “we just need to get critical mass” game starts again as it steals users from RustGoSwift and they lose their momentum. At least Rust has a benefit in that it seems serious about zero-cost abstractions, so it has a better chance of not having to pay for design mistakes. But Rust won’t magically avoid paying for syntactic and semantic cruft.
kwan_e,
But why? Tech gets ditched all the time once it’s outlived its usefulness, and not just software.
In the video Stroustrup says “C++ is obviously too large.”
The Facebook engineer says of C++’s compilation efficiency problems: “I think it’s a problem intrinsic to C++. The language was designed with batch multi-stage processing in mind, and it shows. All these companies that have a huge C++ code base share the same problems. They have a huge army to babysit the build system, and at the end of the day it was an unsolvable problem. No amount of money can make C++ compile fast.”
The Google engineer says: “we measured the data that went to the compiler: a 4.2-million-factor blow-up when using a class per header file.”
C++ will remain useful to many people for a long time, and that’s fine. But there are many companies and individuals who are frustrated with it, and I find it very hard to make a case that they should not embrace new languages. For many of us, languages that carry the burden of legacy cruft are not a good option. There’s no reason different people with different programming language opinions can’t co-exist; just look at mainframers. Diversity is the spice of life.
Then by that reasoning, there is no such thing as a good option. All languages will develop legacy cruft, and new languages will reinvent old cruft. I have no problems with diversity in programming languages. And while you seemed to want to continue with discussing C++, I’ll have you note my first few comments did not mention C++ at all.
What you don’t seem to be acknowledging is that Rust has several things going for it that other languages didn’t.
First, a brief overview of some of the languages I remember you mentioning, to point out the especially noteworthy problems they had:
1. Ada lost out to C because, for far too long during the critical time period, it was much easier and cheaper to get an acceptable-quality C compiler and learning resources.
2. D lost its opportunity because it misjudged the role of GC in the direction things were going, Sun Microsystems threw a ton of marketing money at Java, and it’s STILL recovering from that “originally proprietary, resulting in a bifurcated community” issue.
3. Node.JS is more or less the definition of a fad language/environment/framework. Aside from letting client-side JavaScript developers try their hand at server-side coding without learning a new language, its strengths (an architecture and libraries/frameworks for lightweight persistent connections, and betting on “NoSQL” in the ecosystem) are reasonably easy to retrofit to other languages, while it still has the same major weaknesses (e.g. no mature Rails/Django-level frameworks with mature SQL ORMs, schema migration, communities, etc.) even after Rails and Django have more or less caught up. Is it any wonder that polyglot programmers started to bleed back to platforms with the more comprehensive ecosystems? (It didn’t help that, in many prime use cases, such as document-oriented JSON storage and querying, SQL databases like PostgreSQL also overtook the “NoSQL” databases at their own game.)
4. Outside of Japanese products like RPG Maker, Ruby never really outgrew the niche it captured by being first to market with Rails… but it’s still holding strong with Rails. (I could never get over Ruby’s warts in the first place, but I’ll readily acknowledge that.)
I’ll acknowledge that Go and Swift might be fad languages, given that they feel like the “Web 2.0” of the Java and C# world, but Rust actually has a long list of noteworthy advantages that, taken together in a single package, make it quite unique and compelling:
1. Cargo provides a build system and package manager that matches or exceeds every other mainstream language’s offering.
2. The community so far is welcoming, engaged, and eager to extend the design principles of the core language into the libraries. (To the point where RFCs for language changes now require a “How do we teach this?” section.)
3. It’s got a lot of good, free documentation and (unlike with Swift) there’s been a strong interest in providing a first-class developer experience across all major platforms. (Yes, IDE support and compile times need work, but they’re in progress… it’s just that they were postponed because 1.0 was all about getting right the things that can’t be changed later.)
4. The combination of the type system and the compile-time ownership/borrowing handling enables compile-time enforcement of some very appealing design patterns, such as correct use of state machines, locks you can’t forget to take/release, etc.
5. I think some of the people in /r/rust/ said it best when they said that Rust should really be marketing its strengths as “fearless concurrency” and “fearless refactoring”. Don’t underestimate what a load it takes off your mind when the compiler doesn’t have to let its guard down because legacy C++ exists.
6. Variables are const by default. Type inference is present without any special keywords, but intentionally stops at “function signatures are your ABI boundaries, so they must be explicit” to prevent inference explosions. Again, giving you the strength of properly-used C++ but with a feel closer to a dynamically-typed language.
7. Like C++, it focuses on zero-cost abstraction. Never underestimate what a mind-blowingly appealing thing this is to people used to coding in languages like Ruby, Python, and JavaScript.
8. A powerful, hygienic macro system which operates on the AST so it can give you more correctness guarantees than C or C++.
9. The community is already building impressively comfortable bindings for extending or embedding runtimes like CPython, CRuby, Node.js, Lua, etc.
10. It has a trait-based object system that acknowledges that nouns (structs) and verbs (traits) are separate top-level concepts which interact in defined ways, rather than verbs being pieces of nouns (classes).
11. Monadic error handling and greppable panic macros like `unimplemented!` and `unreachable!`, so it’s easy to ensure you haven’t left something unhandled. (And there are rumblings about `panic!` itself and how to minimize it even further in situations where you really do want to handle things like malloc failure.)
12. Cargo and Rustup have first-class support for cross-compiling (I targeted my OpenPandora handheld with two simple commands to pull in the cross-tools and two lines in the config file to point it at the GCC I’d unpacked for the final link against the handheld’s glibc.) and the community is developing further tools like Xargo to streamline defining and building entirely new targets for things like microcontrollers.
13. Aside from not yet having an equivalent to `#include <foreign_thing.h>`, Rust’s FFI story for C already matches C++’s and they have plans to make it better.
14. A runtime about as thin as C’s, which you can opt out of even more of… making it especially suited to embedding into things which have their own GC or running on constrained hardware.
…and I’m sure I’m forgetting some of them.
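To make point 4 concrete, here’s a minimal sketch (my own example, using only the standard library, not code from any particular project) of a “lock you can’t forget to release”: the data lives inside the `Mutex`, the only access path is through the guard returned by `lock()`, and unlocking happens automatically when that guard goes out of scope.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n_threads` threads that each increment a shared counter,
// then return the final value.
fn increment_in_threads(counter: Arc<Mutex<i32>>, n_threads: usize) -> i32 {
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // The only way to touch the i32 is through the guard;
            // the lock is released when `guard` is dropped at scope end.
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    let counter = Arc::new(Mutex::new(0));
    println!("{}", increment_in_threads(counter, 8)); // prints 8
}
```

There’s no way to access the counter while forgetting the lock, and no unlock call to misplace – the type system simply won’t let you write the buggy version.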
Yes, languages do accumulate cruft, but Rust is such an unusual blend of convenience, performance, safety, and power that it stands out from the crowd, and even when it does get crufty, I don’t see myself giving up on it. From my lurking in the issue trackers, developer discussions, and /r/rust/, it’s clear that every design decision has a lot more thought put into it than you’d assume at first glance, and I seriously doubt I’ll see another language one-up it in its areas of strength any time soon.
Hell, in addition to the “fearless …” descriptions I’ve seen before, one of the comments I saw on /r/rust/ that was telling was along the lines of “maybe the mainstream doesn’t really need to choke down functional programming’s alienness to get the kind of compile-time checking and concurrency-friendliness that everyone is clamouring for. Rust seems to be doing a very impressive job of offering those in a more familiar paradigm.”
Now, yes, I will admit that Rust’s ownership and borrow checking can be obnoxious at times… but many of the situations where that’s a valid complaint, rather than an eventual egg-on-your-face “oh, that really wasn’t sound after all” moment, are areas where, again, they postponed the fix in order to get as many breaking changes as possible in before 1.0.
…and, even with that said, I still think it’s more desirable than the kind of uncertainty I have to deal with when developing or refactoring codebases in other languages.
Rust is simply a very uncommon blend of good ideas, merged in an uncommonly seamless way.
Edited 2017-01-11 08:06 UTC
kwan_e,
I think Rustlang in general has a good shot at growing among independent developers like me. But as I indicated in the first post (in the context of operating systems), critical mass may not be possible without larger guys picking it up too. Since I don’t have a crystal ball, I’ll spare you my speculation
Edited 2017-01-11 09:24 UTC
I don’t disagree that Rust would grow (has grown) among independent developers. But those developers jumped on the Rust bandwagon for the reasons I keep bringing up, and they’ll leave when those reasons begin to apply to Rust itself. So no one would want to hitch their wagon to it unless Rust shows no signs of abandoning its cruft once it’s no longer fashionable, and those developers don’t leave the language.
kwan_e,
You put a lot of weight in the future risks of Rust becoming crufty, but I’m not sure you really appreciate how much frustration C++ causes some of us today and I feel that Rust is building from a stronger foundation than C/C++ had.
If you like Ada more, that’s fine too… I’m just trying to say that rational people can have their own individual preferences, agreed?
Like I keep saying, I was never about arguing about Rust vs C++ or Rust vs Ada. I’ve always been talking about, from my first three comments on this article, the difference between this current generation of languages and their reason for existence, compared to the established languages, and what would stop a language from becoming entrenched.
Forget languages for a second. Think about libraries. jQuery was dominant for a while. But then there’s Angular – two incompatible versions of it. There’s React. There’s Dojo. They’re continually popping up claiming to do better than before yet none of them become entrenched. People get fed up and keep going on to make their own and make the space incredibly noisy and nothing useful gets done.
Kind of like the French Resistance: https://www.youtube.com/watch?v=YO-Ocueehfc
kwan_e,
Well that’s a fundamental difference between you and me then: you seem to think I would like rustlang to become entrenched the way C/C++ is now, but I don’t at all. Only by being more diverse and less entrenched can we help developers choose programming languages for their merits rather than because of market peer pressure. For any language to be so entrenched in the market that we become hostages to it is bad.
In a perfect world, I’d agree with you. But this position is at odds with wanting a rustlang economy, or any economy built upon the whims of fad chasers.
kwan_e,
The fad chasers will do what fad chasers always do; who says that’s a problem? They really aren’t new at all. Known by the less condescending terms ‘trendsetters’ and ‘early adopters’, you’ll find them chasing all kinds of products, from electric cars, smartphones and TVs to children’s toys and even diets. You can’t focus too heavily on this single group, even if you find them fickle, because they represent only a small part of the overall market, and they actually play a positive role in new product discovery for the rest of the market.
https://en.wikipedia.org/wiki/Early_adopter
I’m not saying the path is easy for rust in a crowded field, far from it. But it’s a good sign that it’s gaining positive traction from early adopters and with any luck the early majority will be impressed too.
kwan_e,
Since we have to wrap it up, I’ll try to end with a generic and unbiased point I’m hoping we can both agree on…
Automatic compile-time verification of dangerous programming operations is a new concept for the vast majority of software developers. And regardless of language, it’s a potential game changer for secure systems programming. Given that it’s in its nascency, the only direction for this compiler-verification paradigm to go is up! Therefore, any language that can demonstrate these capabilities effectively should be well positioned to take advantage of growth opportunities as the demand for compile-time-safe programming grows.
Edited 2017-01-12 15:06 UTC
I’ll definitely give you that, but another reason you’re not mentioning is that some of us (myself included) stick to jQuery because we’d rather reinvent data binding than write a website that falls to bits the moment someone installs NoScript or forgets to support JavaScript processing in their search engine.
As Christian Heilmann has blogged about on various occasions, reinventing half your browser’s rendering flow in JavaScript makes for a very fragile design and you end up constantly discovering new behaviours you didn’t realize you’d broken (often in the sphere of accessibility).
Then you’ve missed what I’m saying. I’m not saying the past was better and today is shit.
I’m saying the nature of today’s new languages is one of constant reimplementation with lots of breakage. Whether you think that’s good or bad, it cannot be denied that such churn does not make a good base for any huge investment, especially on the order of creating a safe OS that could replace the current ones.
While Rust may be more serious about maintaining backwards compatibility – it is only at version 1. People change.
You briefly mentioned Pascal earlier.
Would you consider Wirth languages as carrying much cruft? Or didn’t he throw out enough stuff upon each revision? Were the ISO standards helpful or not for potential users? Would you consider them (or derivatives) worth using today? Or should someone make (yet another) derivative from his work (roughly speaking, the Algol line)?
Rugxulo,
You know what, I really should know more about Wirth, I don’t know much about the progression of the Algol line outside of the two flavors of Pascal I had experience with: Borland Turbo Pascal and Stony Brook Pascal.
SBP was highly optimized.
https://groups.google.com/forum/#!topic/comp.lang.pascal/48biR0sk1_Q
My knowledge of Delphi is patchy at best, so I don’t know how badly out of date my first hand knowledge is, but I think many of the features that pascal had in the 90s have aged a bit since then.
At the time I thought it was absolutely great that pascal could automatically catch various kinds of IO/overflow/etc. errors. This helped make pascal software more robust (i.e. compared to C). Overall it seems inferior to the try/catch exceptions most languages have now, but considering dominant languages still don’t handle numeric overflow, it could still be considered an advantage.
The built-in string type was hardcoded as a 256-byte array, where the first byte represented the length. This allowed some pascal string functions to be somewhat faster than C functions that needed to scan for \0. Also, pascal strings were absolutely safe, which allowed pascal to distance itself from most of C’s notorious string vulnerabilities. Something would have to be done to fix the 255-byte string limitation, which was tolerable with 80×25 console apps but not really acceptable today. Lots of code relied on the length in the first byte, so it would have to be rewritten, I imagine. And then there’s unicode…
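For readers who never used Turbo Pascal, the layout is simple to model. This is a hypothetical Rust sketch of the classic ShortString, not actual Pascal: byte 0 holds the length, so getting the length is O(1) with no scan for a terminator, but capacity is hard-capped at 255 bytes.

```rust
/// A Pascal-style "short string": byte 0 stores the length,
/// bytes 1..=255 store the characters.
pub struct ShortString {
    data: [u8; 256],
}

impl ShortString {
    /// Returns None when the input exceeds the famous 255-byte limit.
    pub fn new(s: &str) -> Option<ShortString> {
        let bytes = s.as_bytes();
        if bytes.len() > 255 {
            return None;
        }
        let mut data = [0u8; 256];
        data[0] = bytes.len() as u8; // length lives in byte 0
        data[1..1 + bytes.len()].copy_from_slice(bytes);
        Some(ShortString { data })
    }

    /// O(1), unlike C's strlen, which must scan for '\0'.
    pub fn len(&self) -> usize {
        self.data[0] as usize
    }
}

fn main() {
    let s = ShortString::new("hello").unwrap();
    println!("{}", s.len()); // prints 5
}
```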
While pascal underwent an OOP transition, much like C did with C++, the resulting duality has always bothered me. While the OOP and non-OOP code can work together, once we start mixing conventions for error handling, method calling, etc., I find the resulting complexity considerably worse than in a language that was OOP to begin with.
In my experience, C++ code relies more often than not on C libraries and/or syscalls, which we will often wrap up with new OOP classes, logic, exceptions, etc. And although this works as expected, it’s a lot more boilerplate work than in a language like C# that has always existed in OOP form.
Pascal lacked namespaces, but I think it made up for that with modules and scoping rules, which I find refreshing; that’s one thing pascal got very right and many other languages got very wrong. Pascal has child procedures, which are like encapsulation within encapsulation, and I think that’s a good idea.
Pascal pointers are not safe like they are in rust or managed languages.
Pascal lacked modern features like closures and generics.
I suspect some of pascal’s IO primitives were hard coded into the language, and so it’s probably a lot less flexible than stream operations in C++.
Someone with more up to date experience may be able to fill in some of the details. To answer the question: no, I would not use it today as implemented back then. Hypothetically if there were a new derivative that neatly brought it up to date and also offered rust’s safety semantics, then absolutely I would be trying it out!
MOTD: Software is getting slower more rapidly than hardware becomes faster. – Wirth’s law
Edited 2017-01-12 00:17 UTC
Wirth is probably a perfectionist, and he thrives within anemic environments (see Project Oberon 2013). He was right, of course, but it’s not easy keeping things lean. The modern era has little interest, sadly.
Rugxulo,
Every once in a while when this comes up here on osnews and I criticize the bloat in modern software, I get strong push-back saying it’s not worth developer time to optimize. This is the same belief held by many IT managers, but sometimes it makes products noticeably worse for end users. I’m thinking of one project where we were replacing a mainframe application, which had been lightning quick, with a .NET web application. The customers began complaining about performance because they had been accustomed to instant screen updates, but the latency of both the .NET server and IE rendering was highly noticeable. The client bought a gargantuan multicore server to minimize the server-side latency. We couldn’t do anything about IE’s rendering latency on slower end-user computers, but we added a progress bar and that apparently reduced the perceived latency. Haha.
Edited 2017-01-13 05:12 UTC
That sort of thing is actually the reason I’ll only write or use web applications when I’m actually dealing with hypertext (or certain network-centric use cases where other factors make a native client undesirable).
For example, TiddlyWiki, RSS readers, etc.
In those cases, native applications just reinvent inferior wheels.
Still, the sooner the Rust ecosystem can grow some QWidget bindings and a full-stack web framework that’s competitive with Django, the happier I’ll be.
(Though I am also looking forward to learning Pascal so I can experiment with FPC’s DOS-targeting support on my retro-PC. I’m sure it’ll be more pleasant than working in C.)
The obvious dialects to learn would be “tp” and “delphi”.
Also, I assume you’re referring to the new (3.0.0) “i8086-msdos” (cross-)target. FPC already had a DOS/DPMI (Go32v2) target.
Both, actually. I’m doubtful that Free Vision has retained its early ancestors’ support for being usable in real mode contexts and I’d like to make some TUIs too.
The reason I’m just getting into it now is that it’s been less than a month since all of the little bits of trivia clicked together into “Maybe learning Pascal and using FPC would be a better choice than using OpenWatcom, DJGPP, or Pacific C for targeting DOS, given that it’s got support for safer string types and a more embraced-by-the-compiler-guys-seeming Turbo Vision port.”
Edited 2017-01-14 01:02 UTC
Telling people to upgrade their hardware over software bugs is almost always a bad decision. These days, software lives a much shorter life than the hardware. Though I don’t really have anything pithy to call it (Dork’s Law?).
To be fair, I’ve used C++ since about 1990, and it’s just getting too long in the tooth. Not enough deprecation in there – largely, I guess, due to the way its module system is based on textual inclusion – so it is hard to change things without breaking compatibility.
Templates are lovely, but they end up with almost everything being in a header, so compile times just get worse. Without Concepts, error messages keep getting more complex, frequently surfacing in library code the developer should have no reason to know exists.
I know there are proposals for both modules and concepts, but they seem to be permanently kicked to the next version.
I’ve been experimenting with Rust, and must admit that even in its current state it is just a nicer experience, to me, than C++ development. I wish them every success.
Yes, but at least we know they’re serious about getting it right, because they know they have to support a language feature once it gets in, even if it turns out to be bad (like std::async or std::bind). But even then, whatever state those features are in at inclusion, we know they cost nothing for anyone who chooses not to use them.
Yes, Rust is a nicer experience. That’s because it isn’t old enough to have developed all those cruft issues. All languages are nicer experiences before they’ve developed them. Rust will too, once its user base expands, and then either people will stick with it or, more likely, abandon it for another language, because that’s the reason they came to Rust in the first place.
Not even God managed to fix anything just by drowning the fuck out of everyone and starting over again. And it’s a good metaphor, because language (and OS community) design is pretty much like playing God for that universe. If you’re constantly going to drown everyone and start fresh, nothing will actually grow, and the seeds of cruft will still be there.
Let’s hope Rust doesn’t succumb to that Godly urge.
I think that you may be missing the fact that Rust isn’t written by hobbyists. It’s written and used primarily by professionals with experience in one of the largest C++ code bases on the planet: Firefox. Firefox is big enough that it requires 64-bit build tools to compile on Windows. And it is popular enough that it is a major target for malicious hacking.
These people know exactly what problems they have and what they’re solving.
Being attractive to hobbyists is a nice side benefit.
I don’t see why “professionals with experience” can’t be hobbyists. I’m not talking about skill level.