This article is targeted at embedded engineers who are familiar with microcontrollers but not with microprocessors or Linux, so I wanted to put together something with a quick primer on why you’d want to run embedded Linux, a broad overview of what’s involved in designing around application processors, and then a dive into some specific parts you should check out — and others you should avoid — for entry-level embedded Linux systems.
This is some seriously detailed writing, and an amazing starting point for people interested in developing for embedded Linux.
Very interesting article. The basic idea is that by switching to Linux for your embedded project you get all the usual software stacks and can hire non-embedded developers. The challenge is massive (in a micro sense) power draw, plus boot and response times. No mention of QNX, https://en.wikipedia.org/wiki/QNX , which would be a better tool by far, but I guess it is not free. I bet Minix would be superior as well https://www.minix3.org/ .
Iapx432,
I’m sure QNX is better than linux when it comes to real-time constraints, but I don’t like that QNX is proprietary…
I’ve built a few embedded systems using both microcontrollers and linux SBCs. I honestly don’t feel like I need much from linux besides one glaring exception: a networking stack. For everything else, programming microcontrollers on bare metal is quite pleasant, fun, and useful. I rather enjoy bare metal programming with no need for costly abstractions. Simple microcontrollers use a fraction of the power, have more GPIO options, better timings, etc. However I have yet to find a microcontroller networking solution that isn’t cumbersome. I find SSH to be extremely useful for embedded systems and IMHO that’s the killer feature linux brings to this domain. If I could find a microcontroller with SSH, it would be my microcontroller of choice 🙂
I wasn’t aware that Minix ran on BeagleBone SBCs. That’s pretty cool, although still much more expensive than the RPis I would use in a similar circumstance.
https://wiki.minix3.org/doku.php?id=www:documentation:read-more
QNX is cool but expensive, and so it becomes hard to talk about porting and utility unless you need what it offers and you (or your client) are willing to pay. Minix 3 is really cool, but it just hasn’t been ported to as many architectures. Sure it works on ARM, but are the Allwinner GPIO, I2C, SPI, etc. drivers readily available?
Better is an abstract term. For me Linux is better because it is free (in multiple senses), widely supported, and has a community I can turn to for help. I want to get stuff done, and get my OS customized and up and running as quickly as I can.
I’d say that (for embedded systems – e.g. where software and hardware are part of a final product, like a washing machine or car or smartTV or ..); “better” would/should be defined as “lowest cost of final product”. E.g. you don’t want to use a bloated pile of abstractions designed for a large server if it means you need to use a more powerful CPU and more RAM that’s going to cost you $2 million extra to produce 1 million internet enabled garage door openers. In other words; Linux does have a very real cost that does make it “less better” for embedded systems.
With that in mind; I’d want to be programming on bare metal, but with static libraries (containing drivers, network stack, etc) to reduce developer time; with whole program optimization enabled for production releases (so that the resulting “firmware blob” is as optimized as possible).
The real problem is that there isn’t one standard supported by all (microprocessor) vendors; so availability and quality of “bare metal dev. kit” is too varied.
I am guessing you don’t do much commercial embedded work on higher end hardware (where higher end would be the envy of 1992 lol)?
What you are describing is basically writing — or assembling from parts — a scratch OS for every job you do; and if you need to change hardware, or support more types of devices you have to start over on porting.
Instead when I use embedded Linux, or Zephyr, or VxWorks, or FreeRTOS (to name a few that work at different levels for different kinds of targets), I — or a separate software team depending on the job — can develop to a known set of APIs, build and test while hardware development is taking place, and need to do minimal porting if the hardware platform changes.
A washing machine might not need embedded linux, but that is why there are plenty of lower-level OSes out there if you need some form of multitasking; if you don’t, then bare metal is probably fine. But that SmartTV needs all the advantages of a modern OS, because it is in every respect a computer, and using lower-end hardware just to save on the OS may not provide any real cost savings.
Pennies matter, sure, but you have to balance that against development time. Saving even a week in dev time can make beefier hardware worth it. You know, like getting the 400MHz 64MB CPU as opposed to the 300MHz 16MB CPU when the price differential was under a dollar (a real world example).
jockm,
Be nice! Brendon’s a smart guy and his points were valid in 1992 and will still be valid in 2032.
Bare metal programming isn’t that bad. When it comes to GPIO, I2C, SPI, etc., drivers can be extraneous because the hardware is so simple anyway, and the linux driver abstractions don’t really save any development time at all. Have you looked at the libraries available for microcontrollers? There’s tons of library support for SPI/I2C peripherals, and it’s not like it’s hard to do. If anything, linux is more clunky and difficult because you have more boilerplate code to open file handles and issue weird IO controls. You could create userspace libraries to simplify talking to the kernel, but what have you really gained over a similar library running on a microcontroller? GPIO is a microcontroller’s raison d’être, and for many of us it’s already as simple as it can get. Linux layers can just get in the way.
If you are looking at complicated devices like webcams and network stacks, then sure, those can tip the scale in favor of a real CPU with more power, USB, etc…but linux is marginal at GPIO, I2C, SPI. In fact it’s often suggested to physically connect a microcontroller to a linux computer in order to handle the hardware tasks more robustly, because linux incurs poor response times and is bad at PWM control. For medical applications linux might even be a liability in a realtime control loop.
Of course, we all use the tools that we feel work best. Linux (and the others) should help with portability between product vendors, but that expectation doesn’t always line up with reality. To my chagrin, my x86 linux computer doesn’t support my motherboard’s I2C sensor/fan peripherals. On some ARM SBCs I’ve had a lot of trouble getting the kernel features I needed because I’m stuck running a kernel that lacks those features. Sure it’s supposed to be portable, but hardware incompatibilities do happen in practice. I’ve learned that you won’t always know what’s going to work with linux before you get it – microcontrollers are often more predictable in this regard.
Well, that dollar isn’t so cheap when you multiply it by millions of users.
Anyways, you are right that software developers are sometimes taught that poor software performance is excusable due to the availability of better hardware. Unfortunately this often externalizes the costs such that they’re borne elsewhere. In this case developers might save a few days of development costs, but quadrupling the RAM and a 33% faster CPU might require more power in the end product. That’s less battery time, more carbon emissions, etc. While the manager may just have cared about saving his costs, the negative impact to the world could still be greater than the few days that were saved.
Forgive me for being so contradictory with you haha. I’m not saying your points are wrong, only that it’s complicated and there are other viewpoints to consider 🙂
How much developer time do you really think you’re going to save by using an OS (e.g. Linux) instead of some kind of statically linked developer kit (e.g. Monolinux)?
For “bootable applications built on libraries” there’s no reason that the libraries can’t provide multi-threading, won’t provide a known set of APIs, won’t allow you to write software while hardware is being developed, or will increase the hassle of porting.
The real problem is that there isn’t one standard supported by all (microprocessor) vendors; so availability and quality of “bare metal dev. kit” is too varied (which can lead to no multi-threading, unknown API, hassle porting, etc).
@Alfamn and/or @Brendon
I was being nice, or at least didn’t think I was being hostile. But it is really easy to underestimate how hard something is if you are a hobbyist or outside of the field.
You will notice I did point out when bare metal might be appropriate (like the washing machine), but then said that if you need multitasking — because a time-sliced executive loop or interrupt handlers aren’t always the right approach — then there are all of these lightweight multitasking kernels and operating systems to choose from… which really is what he is suggesting when he talks about a bunch of standard libraries.
But when you talk about building a SmartTV — his other example — you are talking about a computer that would have been advanced not that long ago. It does need multitasking and a networking stack, and to be portable enough to move to whatever processor makes sense for the next generation. It may need Python for some of the third-party software, and Java for others, etc.
And building that complex suite of software and OS configuration is a lot of work. You can literally shave months of development time by going to a larger/more capable part, and for a multi-person dev team those savings add up.
In the example I gave we had to add about $0.07 to the cost of the SoC; however, it saved us about $0.20 by removing the need for a separate component. The easier development from having more memory and a faster CPU saved us a ton of time (months).
If we had to go bare metal, the dev team would have had to either write a custom emulator, or wait until the hardware team had enough prototypes up and running, which could be many months for something as complex as a SmartTV. All of which is a real cost.
I feel like I called out or alluded to all of your major points. But you can invent hypotheticals where I would be wrong, and I pointed out a very real-world example I worked on.
Because, yes, an OS matters that much when you are building something that would have been a Unix workstation 15 years ago.
And as for your motherboard’s I2C fan sensor, I am not sure that is a great example. Can’t you write a driver? Isn’t that the point? If it were bare metal outside of your control, or a closed operating system, you couldn’t. You could then carry that patched kernel forward with you.
Because that is what you — or at least I — do when you do embedded development, so it is portable.
@Brendon I am not saying your approach doesn’t have a place, but I will point out that Monolinux isn’t even complete (no networking) on the Pi 3, and doesn’t seem to have any other real-world ports… whereas I can get my current embedded code up and running on any embedded linux system that has a 400 MHz or better ARM9, I2C, SPI, and about 6 GPIOs.
I download the kernel, build it, download Buildroot, use my standard config, and I have a working system.
I think you need to show me an example of a real world embedded linux project that could run in the kind of setup you describe on a CPU that costs say $1 less. Otherwise it is all hypotheticals and we can spin those all day long.
jockm,
It depends on what the requirements are. I’m not always averse to using a fat kernel like linux, but there are times when it’s not only more complex & difficult, it’s actually inferior. I get that some linux components can be very useful; the network stack and USB peripherals are big ones that I already mentioned. Being a multiprocess/multiuser system may or may not be important…these things depend entirely on the project. For a smart TV I would be inclined to use something running a real OS.
You said “a real world example”, but you’ve only provided your conclusion without any way for us to independently mull over the merits of your conclusion. Maybe I would agree, or maybe I wouldn’t… I understand and respect that you may not want to divulge information, but at the same time the example doesn’t carry much weight without details to cross examine.
So I can’t say much here other than offer a generic opinion: sometimes more powerful hardware makes sense, but sometimes more optimization is worthwhile too.
It may not be directly related to embedded computing, but it does speak to the point that linux abstractions aren’t going to save any time unless the linux kernel & drivers you are using are compatible with the hardware you’ve got. So yeah, we can agree that the abstractions are supposed to be portable, but it doesn’t always work out that way. For this reason it is wise to test development work on the actual machine that will be used early on in the development process, and not assume that it will work.
Some linux abstractions are more robust than others. For example I’d have pretty high confidence that a web server would run on nearly any linux capable hardware where networking is working. But I have less confidence when it comes to platform drivers. I’m not sure about you but I’ve seen my share of things that didn’t work as they’re supposed to.
Anyways, I do use linux in embedded projects sometimes. Next time we should discuss some actual projects 🙂
You were implying that libraries can’t provide multi-threading, won’t provide a known set of APIs, won’t allow you to write software while hardware is being developed, or will increase the hassle of porting. I mentioned Monolinux as a way to make it obvious that the API can be identical, the scheduler can be identical, that portability can be identical, etc.
Sure; Monolinux may not be complete and may not support every piece of hardware in the world; but that’s just another way of saying “The real problem is that there isn’t one standard supported by all (microprocessor) vendors; so availability and quality of “bare metal dev. kit” is too varied.”.
I’m not very familiar with the state-of-the-art in current SmartTVs. When I threw SmartTV on the list of examples I was thinking about my SmartTV; which can play media (movies, music, etc) from USB flash, and has an ethernet socket (for an internet connection) that has never had anything plugged into it; but has no web browser, no ability to run any third-party software, etc.
Once you start needing to run web apps, or Java, or third-party software, you’re no longer talking about “embedded”; so (in hindsight) “smartTV” was confusing because it means 2 very different things (embedded systems like my SmartTV, and more advanced things that are not embedded systems).
Of course maybe the problem here is that we’re using different definitions of “embedded system”. For clarity; my definition is the same as Wikipedia’s ( https://en.wikipedia.org/wiki/Embedded_system ); specifically, the “has a dedicated function within a larger mechanical or electrical system” part. Note that “dedicated function” implies “no third party software” (because that would not be a dedicated function), and therefore no need for security, no need for processes, and no need to support languages like Javascript or Python or Java.
I agree about the author’s intent, the direction. And I know by modern standards 1 GB is not much, yet it’s about the bare minimum you need to run a Linux-based system; but by microcontroller standards it’s huge and “expensive” on so many levels.
Alfman, No doubt networking stacks are the big issue, especially for battery based devices.
Of course I realise this article isn’t trying to paint itself as “low energy”, at least not as “low energy” by MCU standards, in fact the author makes a point of highlighting that which I thank him for.
yes, I usually use t2 embedded linux for that 😉 https://t2sde.org
@rene, yes it was actually Rock Linux that first caught my eye for embedded systems.