Embedded systems have become extremely complex. The big push to connect every device to the internet and create the IoT is driving a demand for embedded software engineers unlike anything seen in recent history. Companies can’t find enough of them, and instead of training new engineers, they are starting to rely on application developers, with experience on Windows or mobile devices, to develop their real-time embedded software. The problem, of course, is that these developers don’t understand the low-level hardware; they only know high-level application frameworks that do all the work for them.
Is this actually true? It’s very difficult to gauge, since most of the attention goes to “sexy” development such as smartphone applications or websites; there’s very little media visibility for lower-level engineering such as embedded development, kernel engineering, and so on. Since I know how easy it is to fall into the trap of believing that everything was better in the past, I genuinely wonder whether this is really a problem, or whether we just perceive it as one.
Application-level programming is not kernel-level programming.
Kernel-level programming is not application-level programming.
The two deal with sufficiently different intellectual domains: application-level programming is mostly about software and very little about direct hardware access (software APIs shield you from many aspects of the hardware), while kernel-level programming is much more about hardware knowledge.
A kernel-level programmer would find it much easier to become an application-level programmer than vice-versa.
If the “intellectual difficulty” of {kernel, OS}-level development were similar to that of application-level development, then systems like Haiku-OS and other interesting (hobbyist?) operating systems would have reached a maturity that easily rivals the functionality of present-day Mac/Windows/Linux/BSD/etc. systems. Acceptance of the OS is another matter.
Why are there only two major GPU vendors (Nvidia and AMD (formerly ATI))? Apart from the funds required to be in that sector, the low-level (i.e. hardware-based) technologies associated with GPUs represent an intellectual paradigm far from the one that application programmers deal with, and rightly so: application programmers are not meant to be delving too deep into the low-level hardware. Even the “great” Intel using AMD’s Vega cores in that Intel/AMD hybrid CPU suggests something about the difficulty that exists in hardware-related detail.
For example: if I were to write a 3D program for visualising the crystal structure of metallic materials, this would entail domain-specific knowledge from the realms of metallurgy and physics. I would most likely be a metallurgist, physicist, chemist, etc. with decent application-level programming experience, developing that visualisation program. I should not be expected to know about low-level hardware features, since APIs (libraries) would exist to wrap the relevant hardware detail. My time is better spent on the crystallography, using application-level programming as a convenient tool, in accordance with my academic training; there are only 24 hours in a day. However, lower-level (kernel/OS) developers are required to provide the foundation for the APIs (libraries) that I, as an application-level developer, would use to access the hardware in a simpler, higher-level way. The low-level programmer is not expected to have experience in crystallography, since that is not the domain they are dealing with, just as low-level hardware is not the domain I am dealing with.
Also, writing device drivers is another issue.
cade,
There used to be more competitors, like Matrox, 3dfx, etc. I think 3dfx was technically ahead, but they were done in by financial troubles. Competing in the fab business takes money and skill; having just one of the two is not enough.
I agree.
I remember NVidia buying out 3dfx IP.
Hard to forget those cards called “Voodoo”.
I remember those 2D-graphics “Number 9” cards in the 1990’s.
3dfx made, to the best of my knowledge, the first consumer-level card to include 3D acceleration, and it was quite exciting on that basis, but the Matrox Millennium had better 2D performance than any other card for quite a while, which benefited most day-to-day tasks much more.
Voodoo porn:
https://www.vogons.org/viewtopic.php?f=46&t=59442&start=0
https://www.youtube.com/watch?v=_3iHV0NvLPI
Amazing video, thank you for posting, Kochise. I sometimes think my poorly soundproofed PC sounds like it’s going to take off, but it looked like that thing probably could, with the air flowing off those fans.
April Fools?
…though it didn’t mean that much; all cards were quite fast by then. I still had a Matrox Millennium G400… (years later, I also got a Voodoo3 for my collection)
There are significantly more players in the GPU space for mobile and embedded systems.
What architectures other than PowerVR are there?
Samsung also makes quite a few embedded graphics chipsets
FIMG, like in the S3C6410? Come on…
Well, writing software is not only about programming, it is programming + engineering. Developing things with low-level stuff is complicated because developers must focus on both the business logic and the low-level implementation details. In the end, people often find that most of the code is about hardware/system control, with the business logic simply buried in it, which makes the code less readable and less reusable.
A good, responsible engineer will then try to refactor the system by creating different levels of abstraction, each of which deals only with a specific problem domain. Other developers then take this as an opportunity and create frameworks to free application developers from the low-level stuff.
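To make the layering idea concrete, here is a minimal, compilable sketch (all names and numbers are invented for illustration; the hardware layer is stubbed so it runs on a desktop):

    #include <stdint.h>
    #include <stdio.h>

    /* Layer 0: hardware access -- the only place register pokes would live.
       Stubbed here so the sketch compiles anywhere. */
    static uint16_t adc_read_raw(int channel)
    {
        (void)channel;
        return 512;                      /* pretend the ADC returned mid-scale */
    }

    /* Layer 1: hardware abstraction -- raw counts become domain units. */
    static float sensor_read_temp_celsius(void)
    {
        return adc_read_raw(0) * 0.1f;   /* made-up scale factor */
    }

    /* Layer 2: business logic -- no hardware detail visible, easy to test. */
    static void thermostat_step(void)
    {
        if (sensor_read_temp_celsius() > 45.0f)
            puts("heater off");
        else
            puts("heater on");
    }

    int main(void)
    {
        thermostat_step();
        return 0;
    }

The point is simply that only layer 0 ever needs to change when the hardware does, and layer 2 can be tested with layer 0 swapped out.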
But I don’t think application development is an easier job, or that it is easier for a low-level developer to become an application developer, because they do need to learn a lot of business knowledge. The most difficult part is that one must have enough “sense” to design the system in such a way that it is more or less future-proof for the type of business it serves.
Well, Intel is the largest GPU vendor; most PCs nowadays are laptops using Intel GPUs…
I work in industrial automation (materials handling).
Out of about six “Controls Engineers”, I’m the only one who really does electronics; the others are at least one level higher up, writing software for PLCs, HMIs, SQL, etc… desktop applications and so on.
If your application is more industrial you can get away with using an off-the-shelf PLC, and you don’t *need* someone like me to design anything custom. That said, if you need to embed controls into something that can’t reside in a panel, or that has very specific requirements the PLC isn’t capable enough to meet, then that’s where a CE or EE is advantageous to have around. And obviously, if you are developing an IoT solution, someone with better design skills than mine might be of use (though I do occasionally do an HMI or two).
I was recently at an industry conference, and one of the cooler things we saw was modular PLCs that plug into a custom backplane; you just plug all your IO in with standardized cables, with no need for screw terminals, etc…
Embedded systems engineering is at an all-time high. Engineers are currently trying to integrate embedded systems into bigger systems such as cars (autonomous cars), or to implement highly customized operating-system modules to control machinery in factories or research facilities.
The thing is that embedded systems no longer exist in a vacuum, and most universities worth their salt teach software engineering the embedded way first and other approaches later, since operating system development is these days tackled as embedded development (Linux especially).
I came here to reiterate this. Embedded software engineers are more in demand than ever.
I’m an embedded software engineer and it is so hard to find any decent colleague, it’s not funny anymore. I’ve been working here for 7 years, and the last time we found someone who was actually up to the job (and who luckily still works for us) was 5 years ago.
We’ve been serving over 4000 locations all over Europe since long before people started calling it IoT. Our company currently has 7 people: three working in sales and administration, one frontend developer and one backend developer working on separate future projects, and 2 software engineers.
Not sure. I focused my studies in embedded and real-time systems, yet the only jobs I could (and can) land are backend (because I detest web dev and mobile).
That said, my home is an IoT heaven: presence/temp/humidity feeds in every room; light/music/coffee-making automation; neighbor presence (sniffing APs and clients around); all connected via WiFi, 433 MHz, LoRa, or nRF24L01+; some battery-run, solar-powered gadgets; running on anything between an Arduino Nano and an ESP32, with Arduino/NodeMCU/FreeRTOS behind it. Everything is centralized on an EMQ cluster and fed to Home Assistant for automation. Plus I toy around with FPGAs, though with not much to brag about (abstractly mapping temperature and presence around the house to a 32×32 RGB matrix).
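For anyone curious what feeding such a broker looks like, here is a minimal publish sketch using libmosquitto, a common C MQTT client (hostname, topic, and payload are placeholders, not my actual setup; build with -lmosquitto):

    #include <mosquitto.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        mosquitto_lib_init();
        struct mosquitto *m = mosquitto_new("demo-sensor", true, NULL);

        /* connect to the broker: host, port, keepalive seconds */
        if (mosquitto_connect(m, "broker.local", 1883, 60) != MOSQ_ERR_SUCCESS) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        /* publish one reading: topic, payload, QoS 0, no retain */
        const char *payload = "21.5";
        mosquitto_publish(m, NULL, "home/livingroom/temperature",
                          (int)strlen(payload), payload, 0, false);

        mosquitto_disconnect(m);
        mosquitto_destroy(m);
        mosquitto_lib_cleanup();
        return 0;
    }

On the microcontrollers themselves you would use a client suited to the platform (PubSubClient on Arduino, for instance), but the protocol exchange is the same.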
When aiming for embedded jobs, my experience has been limited to either “FU, you have no professional experience” or “FU, it’s military/avionics and we don’t like your country of origin.”
With the rise of things like the Arduino and the Raspberry Pi, I find that hard to believe. Deep embedded programming has never been as accessible as it is now, so this should translate into more people picking it up. I wonder, however, how this interacts with traditional embedded programming, which often involved the use of proprietary tools and methodologies that are now considered outdated. Not knowing what companies expect today from an embedded software engineer, it’s hard to tell.
But what I consider more important is that we’re moving from having an embedded MCU do all the work to a system with a “regular” Linux SBC that talks to an extremely simple PIC, ATmega, Cortex-M0 or whatever you use to interface with the hardware.
AFAIK the real difference is that the “old” embedded developer had to know how to write operating-system drivers in an RTOS, whereas now on the Linux side you can interface with the external MCU from userspace, and on the MCU you don’t even need an RTOS, just a simple program with an infinite loop…
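Something like this minimal bare-metal sketch, assuming an ATmega328P at 16 MHz (register names from that part’s datasheet; any similar MCU works the same way): it polls the UART that connects it to the SBC and switches a pin on command.

    #include <avr/io.h>
    #include <stdint.h>

    static void uart_init(void)
    {
        UBRR0 = 103;                          /* 9600 baud at 16 MHz */
        UCSR0B = (1 << RXEN0) | (1 << TXEN0); /* enable RX and TX, 8N1 default */
    }

    static void uart_putc(uint8_t c)
    {
        while (!(UCSR0A & (1 << UDRE0)))      /* wait for empty TX register */
            ;
        UDR0 = c;
    }

    int main(void)
    {
        uart_init();
        DDRB |= (1 << DDB0);                  /* PB0 as output: a relay, say */

        for (;;) {                            /* the infinite loop */
            if (UCSR0A & (1 << RXC0)) {       /* byte received from the SBC? */
                uint8_t cmd = UDR0;
                if (cmd == '1') PORTB |= (1 << PB0);   /* output on  */
                if (cmd == '0') PORTB &= ~(1 << PB0);  /* output off */
                uart_putc(cmd);               /* echo back as a crude ACK */
            }
        }
    }

No scheduler, no driver model, not even interrupts; the Linux box does all the thinking and this just obeys.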
As hardware becomes more powerful and cheaper, we use what could be termed overkill to solve simple problems.
TL;DR: Times change; we move to easier-to-implement solutions at the expense of complexity.
Yeah, I guess we’ll just keep throwing more hardware at the problem until “embedded” development is basically just as high-level as application development… (cheaper than training legions of embedded engineers?)
I agree. It’s strange.
The more a job is described as being ‘in Tech’, the less likely that it involves any serious advancement or understanding of technology.
It is strange because the low-level stuff is genuinely more interesting.
No, actually hearing about how all the traffic systems are individually reflashed with new firmware isn’t interesting to most people (though maybe to us here!)
But as a human interest story it is or at least can be more interesting to hear. All the stress of putting it up against a ridiculous deadline; the lonely men in vans who have to spend all night driving around to fix strange errors otherwise the county would snarl up; the absurd tales of councils and governments blowing fortunes on bad systems; how because of some paperwork error the whole thing is still running on 80’s hardware and they have to send out to the far corners of the earth for the six semi-retired men and women who still know how to update it.
Most of the famous ‘started in a garage’ stories in Silicon Valley, which they make sure we all hear about, aren’t half as interesting or high-stakes as the stuff done in what is currently considered the less glamorous end of the industry.
Where is Linux deficient compared to things like macOS: at the low-level parts interacting with the hardware, or at the high-level interface? I’d say the high-level UI experience.
I know this isn’t embedded per se, but I think it illustrates a point that some low-level engineers perhaps don’t appreciate (or do, and so avoid!) about higher-level systems.
It turns out that as you go up the stack, the combinatorial complexity is what really hits you. It’s the reason object-oriented programming exists. It also turns out that managing application state in the face of user interaction is really hard, hence the endless invention of patterns: MVC, MVP, MVVM, MVU, MVI.
It’s really hard to do great UI. You move from the logically designed system into the realm of having to do things like render text: different size letters with different spacing, Unicode, umlauts, Chinese characters. Then you need to deal with accessibility… the list goes on. As soon as you do any time/date kind of stuff, the more you do the more complex it gets: timezones, calendars, leap seconds, changes in calendars on a whim, etc.
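The date/time mess in particular is easy to demonstrate. A small sketch (glibc-flavoured C; the exact normalization is implementation-defined): in Europe/Berlin, 02:30 on 2018-03-25 never existed, because clocks jumped from 02:00 to 03:00, yet mktime() will happily hand you back some time.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        setenv("TZ", "Europe/Berlin", 1);     /* interpret times in this zone */
        tzset();

        struct tm t = {0};
        t.tm_year  = 2018 - 1900;
        t.tm_mon   = 2;                       /* March: months count from 0 */
        t.tm_mday  = 25;
        t.tm_hour  = 2;
        t.tm_min   = 30;
        t.tm_isdst = -1;                      /* "you figure out DST" */

        mktime(&t);                           /* normalizes the nonexistent time */
        printf("asked for 02:30, got %02d:%02d\n", t.tm_hour, t.tm_min);
        return 0;
    }

On glibc this typically prints 03:30, and every UI layer above has to decide whether that silent adjustment is acceptable.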
The higher up you go, the more you have to deal with the messy human world. Yes, there are more layers below that attempt to hide complexity, but the layers below often leak complexity, or contain bugs and problems you need to work around; the more of them there are, the more problems you potentially have.
Low level stuff is of course complex, but it’s constrained and there has at least been an attempt at good design.
Obviously it’s easier to work as an app developer than as a low-level engineer, because so many apps are throwaway, while the tolerance for poor work at the embedded level is lower, since failure there crashes all the levels above.
However, I’d argue that being a *great* developer higher up the stack is perhaps harder than being a great embedded engineer.
I’m a Windows kernel engineer, and I find the landscape and demand to be healthy and on the increase.
Perhaps that’s just because I work in the security sector, so my view of the world is slightly skewed, but in a world more focused on security and IoT, it makes sense that there’s a resurgence of low-level engineers.
The same thing can be said about Big Data these days.
Back in the day, knowing R/Matlab for analytics or putting a Beowulf cluster into motion would usually bring you recognition at university during your studies (i.e. not so much in the workforce).
Now it’s a highly desirable skill in the workforce. Not many people have it, and companies have to make do with what they’ve got…
I’ve been an embedded developer in the automotive industry for some time now, and quite frankly the low-level stuff is pretty much the same everywhere. You have your ICs connected somehow, either via I2C, SPI or a direct GPIO pin, you use a UART for communication with the outside world, and that’s pretty much it from the pure hardware perspective. If you know the stuff behind those acronyms, pretty much any hardware or embedded project involves them, and it is much the same everywhere.
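And those acronyms are less scary than they sound. For instance, reading a register from an I2C peripheral on an embedded Linux board takes only a handful of lines through the kernel’s i2c-dev interface (bus number, device address 0x48 and register 0x00 are placeholders for whatever your schematic says):

    #include <fcntl.h>
    #include <linux/i2c-dev.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);  /* I2C bus 1 */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x48) < 0) { /* select the device address */
            perror("ioctl");
            return 1;
        }

        unsigned char reg = 0x00;             /* register to read */
        unsigned char val;
        if (write(fd, &reg, 1) != 1 ||        /* point at the register */
            read(fd, &val, 1) != 1) {         /* then read it back */
            perror("i2c transfer");
            return 1;
        }

        printf("reg 0x%02x = 0x%02x\n", reg, val);
        close(fd);
        return 0;
    }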
What is the problem, however, is the electronics surrounding those ICs, which plays a crucial role in how the software behaves on bare metal. For that you need the electronics skill and experience to spot the issues and problems that might need a workaround.
Then again, we just saw Android Things, which is supposed to abstract all of the above behind a nice set of API calls, so you might as well hire a regular Java developer to write a reasonable embedded solution on top of it.
So in the end, I see the opposite problem: the industry manages to hide the embedded quirkiness behind some software stack, and from there a regular developer will suffice. I totally see it happening with Android Things, which might finally achieve deep market penetration.
The dinosaurs who dig through hardware datasheets and meticulously count the I2C communication delays from the schematics to figure out the highest safe communication speed should be a thing of the past soon.
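(For the curious, that kind of arithmetic looks roughly like this, with generic textbook numbers rather than any specific board. An open-drain I2C line rises through an RC curve, and between the spec’s 30% and 70% voltage thresholds the rise time works out to about

    t_r ≈ 0.847 × Rp × Cb
    t_r ≈ 0.847 × 4.7 kΩ × 200 pF ≈ 800 ns

The I2C spec allows at most 1000 ns of rise time in standard mode (100 kHz) but only 300 ns in fast mode (400 kHz), so that hypothetical bus is stuck at 100 kHz unless you fit stronger pull-ups or cut the trace capacitance.)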
I am happy with the project I currently work on, as it is software running on bare metal without any OS in between, and the C code is literally analyzed at the assembly level to fine-tune the execution. That harsh environment is mostly caused by the severe flash limitation we have in our hardware: reasonably sophisticated control software must be squeezed into 16 kB of flash on an 8-bit MCU. And the reason for the limited hardware? The expected production volume over the lifespan of the product. If you add $1 to the unit cost and plan to produce a million units, you literally cut $1M off your bottom line, and that’s why more advanced and expensive hardware is the last resort, used only if it is proven that the functionality cannot be squeezed into the chosen platform.
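To give a flavour of the trade-offs 16 kB forces, here is a hypothetical sketch (the table values and scaling are invented): a small lookup table with integer interpolation instead of the floating-point math a thermistor conversion would normally use, since float support on an 8-bit AVR can cost kilobytes of flash on its own.

    #include <stdint.h>

    /* 17 calibration points, tenths of a degree C, one per 64 ADC counts */
    static const int16_t temp_table[17] = {
        -400, -310, -220, -140,  -60,   20,  100,  180,  260,
         340,  430,  530,  640,  770,  920, 1100, 1320
    };

    int16_t adc_to_decidegrees(uint16_t adc)  /* adc: 0..1023 */
    {
        uint8_t  i    = adc >> 6;             /* segment index: 0..15 */
        uint16_t frac = adc & 63;             /* position within the segment */
        int16_t  a = temp_table[i];
        int16_t  b = temp_table[i + 1];
        /* integer linear interpolation; the >>6 instead of /64 keeps the
           compiler from pulling in a division helper routine */
        return a + (int16_t)(((int32_t)(b - a) * frac) >> 6);
    }

Checking the result of such choices with avr-size after every build becomes a reflex on projects like these.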
In the end, it’s all about money in the embedded IoT world.
Honestly, my current lament is simply “If I wanted to wire together badly documented black boxes, I would have become an Electrical Engineer”.
There’s an awful lot of just wiring together high-level stuff today. It seems like less and less “real” (for assorted definitions of “real”) development; instead it’s all integration work. And not really interesting integration work at that.
With RISC-V, FPGAs and system-in-package, there are a lot of embedded gains to be made by customising the hardware.
So a full hardware engineer who can do low-level software is becoming important.
You also have some software developers having to program FPGAs to accelerate processing.
Yes, there is a shortage of people who are embedded software engineers.
A lot of the embedded software engineer’s role was to make chips you could not change do what you needed them to do. With FPGAs and other options, we are now getting to where you can customise the chips themselves, and more devices are coming with an FPGA or custom-made silicon inside. Doing this custom work is outside the historic role of embedded software engineers as well.
Hardware engineering has become more cost-effective than it used to be.
I remember when engineers understood hardware and the rest were programmers. We need to go back to that distinction and require engineers to be real engineers.
Unfortunately, nowadays a guy who can hack up a website using a bunch of hipster frameworks calls himself a “coder”.
There is a definite shortage of people with true low-level know-how: not just programming, but knowing exactly how things work at that layer.