Here you’ll find my complete set of posts covering the Amiga Machine Code course.
The course consists of twelve letters and two disks, which can be found here. The letters are available as PDFs in their original Danish as well as in English translation.
Some light reading for the weekend.
One of my first paid freelance coding tasks was writing Amiga machine code for a userland lottery application; in fact, I still have the reference books covering Amiga assembly and C programming. Bizarrely, while pondering efficiency in IoT applications, I was looking over these recently to remind myself of the techniques and solutions we used when a few kilobytes was a lot of memory.
I became an embedded programmer after starting out coding on the Atari ST, so I know where you were. The Amiga was superior to the ST in almost every respect, even though the ST had some tricks up its sleeve.
cpcf,
That’s cool. Unlike you and Kochise, I learned on x86 and never did anything on the Atari or the Amiga. Nevertheless, I’m still fond of those simpler times. I have major gripes with how inefficient modern programming has become. Of course this is a legacy mindset, but I still argue that we’re leaving a lot of performance on the table today. No matter how fast modern hardware is, it’s unbelievable that we’re still coping with performance problems.

My day job is web development… ugh, modern platforms are so inefficient. I had to write a batch job for an e-commerce site in WordPress. Using its built-in API, it took 15 minutes to run every hour, pegging the CPU and database the whole time; I rewrote it myself and it ran in a few seconds, no exaggeration. The official API is 99.5% inefficient. Magento is even worse. Sure, you can sometimes mask bad performance with caching, faster CPUs, SSDs, etc., but it still boggles the mind how widely accepted bad programming practices are these days. I hear all the time that things aren’t worth optimizing any more because developer time costs more than hardware, but good grief, where does this end? And what about end-user time and cost multiplied over millions or billions of users?
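To make the gap concrete, here’s a toy, self-contained sketch of the general pattern, not the actual site or its schema (everything below is made up for illustration): paying per-call overhead on every row versus doing the same work in one bulk pass. The real WordPress case layers more on top of this, but the shape of the problem is the same.

```python
import sqlite3
import time

# Toy stand-in for the batch job: update a stock figure for many products.
# Schema, row count, and data are invented purely to show the pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [(i, 0) for i in range(20_000)])
conn.commit()

updates = [(i % 100, i) for i in range(20_000)]  # (new_stock, product_id) pairs from a feed

# Slow path: one statement and one commit per row -- the same shape as calling a
# heavyweight API once per product, where every call pays its own overhead.
t0 = time.perf_counter()
for stock, pid in updates:
    conn.execute("UPDATE products SET stock = ? WHERE id = ?", (stock, pid))
    conn.commit()
print(f"per-row round trips: {time.perf_counter() - t0:.3f}s")

# Fast path: the same updates as one bulk pass inside a single transaction.
t0 = time.perf_counter()
conn.executemany("UPDATE products SET stock = ? WHERE id = ?", updates)
conn.commit()
print(f"single bulk pass:    {time.perf_counter() - t0:.3f}s")
```

The absolute numbers don’t matter; the point is that per-item overhead multiplied across thousands of items is where the minutes go.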
I’ll go take my meds now…
Yes, I understand totally.
The computer scientists I worked with on the Commodore 64 and the Amiga chose those platforms because they apparently had superior random number generation and the kernel support to take advantage of it. My task was to take the historical draw data and implement a selection of numbers for lottery tickets using a biasing algorithm driven by a random number. Jokingly, we called that bias Einstein’s Lottery Constant: a multiplier that varied slightly either side of 1 for each number in a draw.

Back in those days lotteries ran every draw off the same set of numbered balls, and after a few hundred draws anomalies became visible in the data, with certain numbers being more likely due to weight or dimensional bias in the tolerances of the balls. Our software, and that of a few other groups around the globe, had a good 12 to 18 months before legislative bodies changed the way draws were managed, introducing multiple sets of numbered balls and randomisation/scrambling of those sets. The key that made it possible was good random number generation. It took them a while to catch up because in most countries the national lotteries had the method for selecting numbers hard-etched in law; the laws had to be changed before the draw procedure could change.

We didn’t make millions, but we did better than bank interest. A couple of individuals, one of them a very well-known mathematician, became quite wealthy because they charged a fee on syndicated ticket shares; we didn’t do that. Some were quite unscrupulous because they kept selling the system after it was obsolete. I’m not sure even a quantum computer would deliver a result these days because of the way the numbers are scrambled.
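For anyone curious what that kind of biasing looks like in practice, here’s a minimal sketch in Python. The original was Amiga machine code, and the exact weighting formula, names, and parameters below are my guesses for illustration, not the code we shipped: count how often each number has come up, compare that with what a fair draw would predict, and nudge each number’s selection weight slightly either side of 1 accordingly.

```python
import random
from collections import Counter

def biased_ticket(history, pool=range(1, 46), picks=6, strength=0.05, rng=random):
    """Pick `picks` numbers from `pool`, nudged toward numbers that have come up
    more often than chance in `history` (a list of past draws, each a list of numbers).

    The per-number multiplier sits slightly either side of 1.0, in the spirit of
    the "Einstein's Lottery Constant" described above; the formula is a guess.
    """
    pool = list(pool)
    counts = Counter(n for draw in history for n in draw)
    total = sum(counts.values())
    expected = total / len(pool) if total else 0  # hits per number if the draw were fair

    weights = []
    for n in pool:
        # ratio > 1 means the number has appeared more often than a fair draw predicts
        ratio = counts[n] / expected if expected else 1.0
        weights.append(1.0 + strength * (ratio - 1.0))  # multiplier slightly either side of 1

    # Weighted sampling without replacement to build one ticket.
    ticket, candidates, w = [], pool[:], weights[:]
    for _ in range(picks):
        i = rng.choices(range(len(candidates)), weights=w, k=1)[0]
        ticket.append(candidates.pop(i))
        w.pop(i)
    return sorted(ticket)

# Made-up history, just to show the call.
past_draws = [
    [3, 7, 19, 22, 31, 40],
    [7, 11, 19, 28, 33, 45],
]
print(biased_ticket(past_draws))
```

With the bias parameter near zero this degenerates to a fair uniform pick, which is roughly what happened once the authorities started scrambling the ball sets and the historical anomalies stopped carrying any signal.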