ReactOS, the open-source project striving for binary compatibility with Windows applications and drivers, is still working away in 2022 on symmetric multi-processing (SMP) support.
Proper SMP/multi-core support is obviously critical for today’s hardware, or really for anything from the past two decades. It’s also been a pain point for ReactOS, but fortunately the situation is improving.
We’re still looking at very early code that hasn’t even been merged yet, but once it has been, this will be a massive leap forward for the project.
Terrific article. Stuff like this should be taught to kids; instead we teach them WordPress.
At university we would solve stuff like lunar landing simulations on similar DEC systems; it was all the rage in the 1970s, staring longingly into the abyss as the system spat out the results of our calculations in a deafening daisy-wheel symphony. Less than a decade after the might of NASA had solved those problems with slide rules and a phalanx of mathematicians, one engineer could sit at a terminal and come up with a valid result in an afternoon. Then, a decade further on, I was hacking the PC-4 (PB-100) and doing the same simulations on a gadget I could stuff in my pocket, just to prove to myself I could!
Now, on a $10 STM/ARM dev board, kids can emulate both and publish a website showing the world the results.
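For the curious: a bare-bones 1D version of that descent problem fits in a dozen lines of Python today. This is just an illustrative sketch; the gravity, thrust, and altitude numbers are made up.

    # Minimal 1D lunar descent: Euler integration, full thrust below a
    # computed braking altitude (a "suicide burn"). All numbers are toy values.
    g = 1.62            # lunar gravity, m/s^2
    dt = 0.1            # time step, s
    h, v = 1000.0, 0.0  # altitude (m) and downward speed (m/s)
    thrust_acc = 4.0    # engine deceleration at full throttle, m/s^2

    while h > 0:
        # fire the engine once coasting any longer would mean a crash
        burn = h <= v * v / (2 * (thrust_acc - g))
        a = g - (thrust_acc if burn else 0.0)  # net downward acceleration
        v += a * dt
        h -= v * dt

    print(f"touchdown speed: {v:.1f} m/s")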
Today it would be realistic to solve the problem with nothing more than a self-learning neural network.
A little further into the future, it should be possible to evolve an artificial lifeform capable of completing a full lunar mission in VR.
And some day past that it will be done in reality instead of VR.
So many disciplines that were extremely cutting edge and demanding at one point could become completely obsolete, haha.
“nothing more than a self-learning neural network.”
The problem with that is YOU then don’t know how to solve it yourself… you just know that a bunch of computations occurred and it happened.
Also, it is almost always the case that you can solve a problem with very few calculations if you know a good algorithm. Neural networks really only make sense when you don’t know a good solution, or when you need your solution to include some plasticity; in pretty much all other cases they are a waste of power.
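Case in point: deciding when to start the braking burn in a basic lunar descent has an essentially closed-form answer once you know the physics. With made-up numbers (full-throttle deceleration 4 m/s², lunar gravity 1.62 m/s², descending at 44 m/s), it’s a few lines of Python, no network required:

    # Closed-form "when to start braking": no search, no network, just algebra.
    v, thrust_acc, g = 44.0, 4.0, 1.62       # made-up descent numbers
    h_burn = v**2 / (2 * (thrust_acc - g))   # start braking at ~407 m
    t_burn = v / (thrust_acc - g)            # the burn lasts ~18.5 s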
cb88,
It depends on how much we end up simplifying the problem in order to keep it easy for humans to compute. But as systems become more complex and we introduce increasingly complex models to represent them, the simple and elegant solutions accumulate more and more caveats and become less accurate.
I think a NN that can simulate millions or billions of missions virtually may be better positioned to avoid dangerous scenarios, and to act quickly to mitigate them when they do happen, before things become more serious. Like when Elon Musk’s rocket ran out of fuel and crashed: a human mission running a human-written algorithm did that, and it shouldn’t have happened. Or when the Boeing 737 Max sensor failures caused multiple planes to dive from the sky; human-built algorithms did that too.
I predict that engineers will make the transition from writing algorithms to coming up with as many failure modes as possible and feeding them into simulations to train a NN. This will not only save development time, it will also produce better contingency plans than we can come up with ourselves.
Just as computers have beaten the top humans at “go” and Jeopardy, I expect they will ultimately beat us at planning complex space missions too.
Boeing 737 Max… that’s a can of worms, huh. That one failed because it was virtually impossible for it to “succeed”: Boeing itself was forcing something to work that wouldn’t have been designed that way otherwise, and to “appear to function” in a way that it did not in reality.
In other words, they created a convoluted mess and it bit them in the rear because they couldn’t understand the intricacies. This is only solvable with KISS principles; no amount of neural networks can save a company hell-bent on eking out dollars rather than doing good engineering.
cb88,
You’re talking about the engine placement; yes, that was a constraint that wouldn’t have been engineered this way if they had the choice. Boeing officially blamed the pilots for the failures, but I think most if not all of the blame was Boeing’s. The pilots were literally fighting the algorithms that were pulling the planes down shortly after takeoff. The other pilots who experienced the symptoms but didn’t ultimately crash were extremely lucky.
I’m all for following KISS engineering principles, but arguably a neural net that had flown billions of virtual flights could have saved those planes. Those neural nets could be trained against failure modes of every single sensor and system on the plane, in both expected and unexpected combinations. Keep in mind it was ultimately an extremely trivial failure mode that took those planes down.
Anyways, I don’t think a NN should be necessary just to fly a plane. The discussion has drifted beyond the original point. haha.
How would you have simulated that failing sensor, when even Boeing couldn’t figure out what the cause was?
Kochise,
You automatically simulate a failure on every sensor input in the system. This should include the sensor being offline, but also the sensor sending erroneous data. This way the NN can be trained to successfully accomplish its mission despite the faulty input. Of course there are limits to how much can fail, but regardless, a NN that was trained against millions of failure modes would likely have more success in dealing with them than the human pilots, who received little to no training on that failure mode. Even trained pilots can make mistakes due to information overload, not to mention response times.
Of course, the big difference is that humans have a great deal of “generic experience” that computers, confined to the bounds of their simulations, lack entirely. But the quality of simulations is rapidly improving, so to me it’s not a question of if computers can beat humans, but when.
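To make that concrete, here is roughly what I mean by automatic fault injection, as a Python sketch. The sensor names, failure modes, and the commented-out training loop are all hypothetical:

    import random

    # Hypothetical sensor set; each reading gets a chance of being corrupted.
    SENSORS = ["aoa_left", "aoa_right", "airspeed", "altitude"]

    def inject_faults(readings, p_fail=0.05):
        """Randomly corrupt sensor inputs before they reach the controller:
        a sensor can drop offline (None), freeze at a stuck value, or drift."""
        faulty = dict(readings)
        for name in SENSORS:
            if random.random() < p_fail:
                mode = random.choice(["offline", "stuck", "noisy"])
                if mode == "offline":
                    faulty[name] = None
                elif mode == "stuck":
                    faulty[name] = 0.0  # frozen/garbage value
                else:
                    faulty[name] = readings[name] * random.uniform(0.5, 1.5)
        return faulty

    # Training loop sketch (simulator and controller are hypothetical): every
    # simulated flight sees corrupted inputs, so the controller is graded on
    # surviving bad data, not just good data.
    # for episode in range(1_000_000):
    #     readings = simulator.sense()
    #     action = controller.act(inject_faults(readings))
    #     ...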
How, and for how long, has BeOS/Haiku dealt with SMP on x86?
BeOS… dealt with SMP on the AT&T Hobbit in 1990 (until its discontinuation in 1993), then it moved to PPC, then to x86 around March 1998…
Microsoft released the SMP-capable Windows NT 3.1 in mid-1993; OS/2 and Solaris 2.4 followed with SMP support around 1994.
Also, there are people here saying they ran NT 3.1 on 8-processor machines:
https://www.os2museum.com/wp/nt-3-1-smp/comment-page-1/