If you’ve followed any one of the amazing tutorials on how to set up a mainframe on a conventional personal computer, you’ve probably noticed they end with the login screen, as if everything beyond that point would be intuitive and self-explanatory to newbies. I mean… That was my assumption going into this project. I’ll figure it out. How hard could it be? Maybe it would take me a few hours. Maybe I’d have to Google some stuff… Read some documentation…
It took me over a week.
Over a week to figure out enough to compile and run a basic program.
“IBM has so much documentation (about z/OS) and that documentation is so dense that it feels impossible for even Google to penetrate.”
LOL
And yet, from experience, let me tell you Google still does a better job of finding the information you need than any of IBM’s internal search systems.
Their “InfoCenter” is anything but.
JCL is pretty much wizardry. Not because of the weird rituals, but because JCL doesn’t get written so much as passed down from old employee to young employee. Everyone has their collection of inherited and hand-crafted JCL, which they pass on to new employees, and so on and so forth. You’re never really sure why it works, and you may accidentally summon demons every once in a while.
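For anyone who hasn’t seen one, here’s a minimal sketch of the kind of deck that gets handed around: an IEBGENER job that just copies a dataset to the job output. The job card parameters and the dataset name are placeholders, not anything from the article.

    //COPYJOB  JOB (ACCT),'COPY A FILE',CLASS=A,MSGCLASS=X
    //* Straight copy: SYSUT1 (input) -> SYSUT2 (output), no control statements
    //STEP1    EXEC PGM=IEBGENER
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD DUMMY
    //SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
    //SYSUT2   DD SYSOUT=*

Nothing in there is self-explanatory, which is exactly why these decks get copied and tweaked rather than written from scratch.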
We’re looking at 50 years of computing history and momentum here.
These are systems built for an earlier age, but whose concepts and idioms remain.
Just like “those French have a different word for everything”, so does IBM.
I have no direct experience with these systems. I’ve always lamented a bit that I never did, as I think it would have been very interesting, especially the AS/400. The only mainframes I worked on were CDC machines, and, as different as they were, they weren’t as different as IBM’s.
It would have been interesting for the same reasons C coders should learn Lisp. Completely different approaches to similar problems.
The comments about moving the cursor around the screen are because she’s working with a block-mode terminal, not a typical “glass tty”. The entire page is sent to the computer (well, the fields are), not individual keystrokes. One can readily see the benefits of local editing when every CPU and interrupt cycle was precious, compared to hammering some poor machine with typos and backspaces and such over 300-baud connections. Less so today with our abundance of bandwidth and idle CPU, but there are times it sure would be worth it, like when you have a laggy SSH connection.
Mosh provides something similar (local echo, transmitting state changes rather than a raw byte stream) for SSH terminal connections, although of course you can’t use it for tunnels.
https://mosh.org/
I go back further than that, to an IBM System 3 Model D and OCL, which came before JCL, and wrote in COBOL, RPG II, FORTRAN IV, and some version of BASIC.
It was much easier to copy the format of a COBOL program, delete what you didn’t need (throw away the cards, or save them for another time in a physical file folder), and make your next program. It saved me a lot of time and got me down to writing only what was different.
Then we went to TAB machines with 8″ floppy disks, and you typed everything onto them and then loaded from the floppy into the mainframe. MUCH better. Except that the TAB machines had their own weirdness to them.
Two steps forward and somewhere between .001 and two steps back. It was never just “forward”.
Then I moved from IBM to an HP 3000.
Fun fact: you could literally power off an HP 3000 in the middle of almost ANYTHING, turn it back on, and it would start up where it left off. And if you told it to write from a tape drive to the console, it would lock up the HP 3000, which is why you would want to turn it off and turn it back on. It would then see what the last command was, skip the bad command, and things would go on from there.