I’m covering the topic of FreeRTOS and interrupts in my university lecture material. But I have seen so much wrong usage of interrupts with the RTOS that I think the subject deserves a dedicated article. The amazing thing I see many times: even if the interrupts are configured in a clearly wrong way, the application surprisingly ‘seems’ to work, at least most of the time. Well, I think everyone agrees that ‘most of the time’ is not good enough. And because problems with interrupts are typically hard to track down, they are not easy to fix.
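A typical example of the kind of mistake meant here is calling a normal FreeRTOS API from an interrupt handler instead of its ‘FromISR’ variant. Below is a minimal sketch of the correct pattern on a Cortex-M port; the handler name, queue name, and data value are placeholders, not code from the article:

```c
#include "FreeRTOS.h"
#include "queue.h"

/* Queue created elsewhere with xQueueCreate(); the name is a placeholder. */
extern QueueHandle_t xEventQueue;

/* UART interrupt handler (the exact name depends on the vendor startup code). */
void UART0_IRQHandler(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    uint8_t data = 0x42; /* placeholder: would be read from the UART data register */

    /* Use the FromISR variant: calling xQueueSend() here is the classic bug,
       because it may attempt to block and will corrupt the scheduler state. */
    (void)xQueueSendFromISR(xEventQueue, &data, &xHigherPriorityTaskWoken);

    /* Request a context switch on exit if the send unblocked a
       higher-priority task. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}
```

Even with the FromISR API, the interrupt’s priority must be at or below configMAX_SYSCALL_INTERRUPT_PRIORITY (in the logical sense), otherwise the call is unsafe; this is exactly the kind of misconfiguration that still ‘seems’ to work most of the time.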
What if a dedicated meta-CPU were committed to that?
General development could benefit a lot, and so could Erlang-like languages. Maybe machine-learning training too.
Imagine two identical CPUs running exactly the same code, on the same data, but out of phase by a few seconds.
Then a third CPU, whose only job is to oversee the other two.
“Hey! Your twin has just run over a precipice. Compute alternate ways.”
Once the alternative is taken, the twin is re-instanced with the alternative and becomes the trailing CPU.
Fewer freezes? Fewer blue screens of death?
You’re seriously asking for a CPU to magically just know what the programmer intended, instead of what the programmer wrote, and to have an insanely advanced built-in AI that is then capable of taking the programmer’s intention into account and writing completely new code for itself, fully autonomously?
You’re right, WereCatf, my comment was cryptic. I want the ‘oversight’ unit just to trace, to show the future. ‘Alternate ways’ is an intelligence issue. Instead of freezing, such a setup could show the problem ahead to the user and offer (hopefully non-catastrophic) options.
That ‘oversight’ unit could be more of a ‘stream’ processor, unable (by design) to pause or stop.
Remember that scene in Avatar where they would just suddenly fall catatonic? That is unacceptable behavior for any kind of ‘oversight’ unit.
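To make the thought experiment concrete, here is a toy C sketch of the three roles: a leader, a twin trailing it by a few steps, and the oversight logic that promotes the twin when the leader faults. Everything here is hypothetical illustration; a real implementation would be lockstep hardware, not a loop:

```c
#include <stdio.h>

#define LAG 3    /* the trailing twin runs this many steps behind the leader */

/* Toy 'CPU state': a program counter and one register. */
typedef struct { int pc; long reg; } State;

/* One execution step; the leader hits a simulated fault at pc == 7
   ("running over a precipice"). */
static int step(State *s)
{
    s->reg += s->pc;
    s->pc++;
    return (s->pc == 7) ? -1 : 0;   /* -1 signals a crash */
}

int main(void)
{
    State lead = {0, 0}, twin = {0, 0};

    for (int t = 0; ; t++) {
        if (step(&lead) != 0) {
            /* Oversight unit: the leader faulted, but the twin is still
               LAG steps in the past -- a known-good point to re-instance
               from and try an alternate path. */
            printf("oversight: leader faulted at pc=%d\n", lead.pc);
            printf("twin trails at pc=%d reg=%ld; promote it to leader\n",
                   twin.pc, twin.reg);
            return 0;
        }
        if (t >= LAG)
            step(&twin);   /* the twin replays the same code, delayed */
    }
}
```

The interesting design question is the one raised above: the oversight unit can only report what it observes (trace), while choosing the ‘alternate way’ still requires intelligence it does not have.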
Just use modern debugging tools like TRACE32, so you can see what you want. If they don’t offer something you need, you can write a script to achieve the desired functionality.
I weep for today’s programmers. The whole article is merely a reprint of the interrupt chapter of the ARM hardware manual, with the author pointing out the obvious. Everything he calls “confusing” is fairly commonplace to old-school programmers who grew up programming in assembly. He’s clearly a C++/C# programmer lost in the world of assembly coding and direct hardware access.
Well, not really. Other CPU architectures work a bit differently in that respect. Try the 680x0 line, especially from the 68020 onward.
However, things are rather precisely customizable on the ARM, at the expense of simplicity. The problem is that only 5% of use cases need this level of complexity.
Yes, really. Granted, the ARM INTC is more like the INTC in PCI-based or newer PCs, but it’s still all old hat for us old-school programmers. He’d fry his brain trying to work out the old chained interrupt controllers in the old AT PCs.
The 680x0 interrupt dispatch mechanism is definitely different from the ARM or x86 one, but it’s still not all that confusing. I programmed interrupt handlers on both the Amiga and Mac lines back when they had 680x0 processors. It’s not hard if you read the manuals. Motorola’s manuals in particular were very easy to read and understand. Intel’s were a joke, and sometimes wrong… lots of errata in Intel manuals.
Don’t even try reading ST (STM32) manuals and datasheets if you want to keep your faith in humanity.
Motorola’s were the best I ever read, by far. Logical, progressive, with many explanations, timing diagrams, tables, and WORKING examples.