InfoWorld’s Galen Gruman highlights 18 technologies that remain core to the computing experience for IT, engineers, and developers 25 to 50 years since their inception. From Cobol, to the IBM mainframe, to C, to x86, these high-tech senior citizens not only keep kicking but provide the foundations for many vital systems that keep IT humming.
Interesting topic, but this is InfoWorld. I don't click on InfoWorld links. I'm 100% sure they put one technology per page, that the text is one sentence long and covers about 10% of the page, the rest being filled with adverts.
The OSnews comments may be interesting, but InfoWorld is not worthy of my time.
FTFA:
1. Cobol: 1960
Developed by a government/industry consortium, the Common Business-Oriented Language became the standard for financial and other enterprise software systems, and is still in use today in legacy systems that power many government, financial, industrial, and corporate systems.
2. Virtual memory: 1962
A team of researchers at the University of Manchester working on the Atlas project invented a way for computers to recycle memory space as they switched programs and users. This enabled the time-sharing concept to be realized.
3. ASCII: 1963
The American Standard Code for Information Interchange, which defines how English-language letters, numerals, and symbols are represented by computers, was formalized in 1963. Today, it’s been extended from 128 characters to 256 to accommodate accented letters, and is being replaced by the multilingual Unicode standard (created in 1988), which still uses the ASCII codes at its core.
4. OLTP: 1964
IBM invented OLTP (online transaction processing) when it created the Sabre airline reservation system for American Airlines. It linked 2,000 terminals (via telephone) to a pair of IBM 7090 computers to handle reservations processing in just seconds. The fundamental OLTP architecture is in use today in everything from e-commerce to, well, airline reservations.
5. IBM System/360 mainframe: 1964
It cost IBM $5 billion to develop the family of six mutually compatible computers and 40 peripherals that could work together, but within a few years, it was selling more than 10,000 mainframe systems a year. The System/360 architecture remains in use today as the backbone for current IBM mainframes.
6. MOS chip: 1967
Fairchild Semiconductor invented the first MOS (metal-oxide semiconductor) chip, the technology still used for computer chips today in the form known as CMOS (complementary metal-oxide semiconductor). The original Fairchild CPU handled eight-bit arithmetic. Note: Jack Kilby created the first integrated circuit at Texas Instruments in 1958, using a different process based on germanium.
7. C: 1969
Bell Labs' Dennis Ritchie designed the C programming language for use with the then-new Unix operating system. The C language is arguably the most popular programming language in the world, even today, and has spawned many variants.
8. Unix: 1969
Kenneth Thompson and Dennis Ritchie at Bell Labs developed the Unix operating system as a single-processor version (for use on minicomputers) of Multics OS, a multiuser, multitasking OS for time sharing and file management created earlier in the decade for mainframes.
9. FTP: 1971
MIT student Abhay Bhushan developed the File Transfer Protocol (first known as the RFC 114 draft standard). He later helped develop the protocols used for email and the ARPAnet defense network.
10. Ethernet: 1973
Robert Metcalfe (later InfoWorld’s publisher and then longtime columnist) invented the networking connection standard, which became commercialized in 1981. Its successors are now a ubiquitous standard for physical networking.
11. x86 CPU architecture: 1978
Intel’s 8086 processor debuted what became known as the x86 architecture that today still forms the underpinnings of the Intel and AMD chips used in nearly all PCs, including those that run Windows, Linux, and Mac OS X.
12. Gnu: 1983
Richard Stallman, who later formed the Free Software Foundation, didn’t like the notion of software being controlled by corporations, so he set out to produce a free version of AT&T’s Unix based on the principles espoused in his book “The Gnu Manifesto.” The result was Gnu, an incomplete copy that languished until Linus Torvalds incorporated much of it in 1991 into the Linux operating system, which today powers so many servers.
13. Tape drive: 1984
IBM's 3480 cartridge tape system replaced the bulky, awkward tape reels that had defined computer storage since the 1960s with the enclosed drive systems still in use today. IBM discontinued the 3480 tape cartridge in 1989, but by then its format was widely adopted by competitors, ensuring its survival.
14. TCP/IP: 1984
Although the military's ARPAnet adopted it in 1980, the first formal version of the TCP/IP protocol was agreed to in 1984, setting the foundation for what has now become a universal data protocol that undergirds the Internet and most corporate networks.
15. C++: 1985
When AT&T researcher Bjarne Stroustrup published “The C++ Programming Language,” it catapulted object-oriented programming into the mainstream, forming the basis for much of the code in use today.
16. PostScript: 1985
John Warnock and Charles Geschke of Adobe Systems created the PostScript page description language at the behest of Apple co-founder Steve Jobs for use in the Apple LaserWriter. PostScript was an adaptation of the Interpress language that Warnock and Geschke had helped create at Xerox PARC for use in laser printers, which were beginning to emerge from the labs into commercial products. PostScript is still used in some printers today, but its primary function is as the foundation for PDF.
17. ATA and SCSI: 1986
Two pivotal and long-lasting data cabling standards emerged the same year: SCSI and ATA. The Small Computer Systems Interface defined the cabling and communication protocol for what became the standard disk connection format for high-performance systems. SCSI originated in 1978 as the proprietary Shugart Associates System Interface and competed with the ATA (aka IDE) interface that also debuted in 1986 with Compaq's PCs, but the ATA specification was not formally standardized (under the ATAPI name) until 1994. SCSI today is mainly used in server storage, whereas ATA continues to be used in desktop PCs in both parallel (PATA) and serial (SATA) versions.
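(An aside from me, not from the article: item 3's point that ASCII characters are just small integer codes is easy to see in a couple of lines of C.)

```c
/* ASCII letters, digits, and symbols are just small integers;
   upper and lower case differ by exactly 32. */
#include <stdio.h>

int main(void)
{
    printf("'A' = %d, 'a' = %d, '0' = %d\n", 'A', 'a', '0'); /* 65, 97, 48 */
    printf("'A' + 32 = '%c'\n", 'A' + 32);                   /* prints 'a' */
    return 0;
}
```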
Thanks for that.
Omfg!
So Linus Torvalds incorporated GNU into his Linux operating system?
That is another reason why I don’t click on Infoworld links. They write complete crap.
Yes, that is questionable. It would be better to say that Linus Torvalds created the first GNU distro. Earlier, there was no kernel for the GNU operating system, so the Finnish student Linus Torvalds filled that gap and created the first GNU distro. Then other GNU/Linux distros spawned, of course.
Regarding IBM Mainframes, it always surprises me that they still live, as the IBM Mainframe CPUs are much slower than a fast x86 CPU. And the biggest IBM Mainframe, the z196, has 24 of these slow CPUs, and it costs many tens of millions of USD. If you have an 8-socket x86 server with Intel Westmere-EX, then you have more processing power than the biggest z196 IBM Mainframe. You can emulate IBM Mainframes on your laptop with the open source emulator TurboHercules.
Here is the z196 CPU, which IBM dubbed the "World's fastest CPU" last year. It runs at 5.26GHz and has almost half a GB of cache (L1+L2+L3), but it is still much slower than a fast x86 CPU. How could IBM fail so miserably with the z196 transistor budget?
http://www-03.ibm.com/press/us/en/pressrelease/32414.wss
And then there is old COBOL. It's not particularly sexy or hot. It is boring and ugly, and only used on old dinosaurs (i.e., Mainframes).
That's why there are tons of COBOL jobs available right now.
If you're lucky, you're the one who writes the salary into the employment contract. 🙂
COBOL is a language that suffers from the fact that nearly nobody can (or wants to) remember it. It's not in the scope of university or professional education. Those who are skilled in it are "old men" today. There are only a few young people willing and able to code in COBOL, as it doesn't seem very attractive at first sight. But the number on the bill may justify diving deeper into a language that is a domain of the "greybeards". That's why it's traditionally not considered "modern" and is therefore uninteresting. Yet it still runs much of the important infrastructure whose users and owners are possibly about to lose control of it. So they are willing to spend interesting amounts of money on those who can keep things running.
Now try to guess what this is:
Eytsh. Eff input, I pee ieh, eff, eighty eighty … (pause) read forty. Eff output, oh, vee, hundredthirtytwo hundredthirtytwo, of printer.
I input, ah ah, zero one. I … (long pause) … one twelve hello. Oh output, tee, two, ell err. Oh … (pause) hello twelve.
Got it?
It's RPG (Report Program Generator), a programming language that's also still around. You find it in accounting and rental services. If I remember correctly, it even dates back to the '50s. The example above is "hello world".
Yeah well, being a porn star pays well too. Doesn't mean it's a wise career move.
That’s one fugly language o_O
Mainframes are supposedly about throughput of many concurrent transactions and I/O; lots of stuff is offloaded to coprocessors. "Central Processing Unit" seems to have a slightly different meaning in them (mainframes don't really seem to be strictly about raw CPU crunching).
Then there are claims about reliability, online repairs and upgrades, and verifiability (they seem to basically do everything twice and compare the results, at a minimum? That ought to slow things down), or security (apparently stemming from a few choices about the overall architecture of the machines; that also can't help, at least, with raw CPU number crunching).
Yes, but maybe we can agree that IBM Mainframes have slow CPUs, inferior to x86 CPUs. So you don't buy IBM Mainframes for their CPUs, because they are not suited for raw number crunching. Any cheap x86 cluster is much faster at raw number crunching.
Yes, IBM has made a lot of claims. For instance, that the biggest Mainframe can virtualize 1,500 x86 servers. I dug more into this, and it turned out that IBM assumed all the x86 servers were idle while the Mainframe was 100% loaded. That makes sense, because if any x86 server starts to do some work, the Mainframe cannot keep up with the fast x86 CPUs.
But in my opinion, this is false marketing. I can boot up 10 emulated Mainframes on my PC, but it would be false of me to say "my PC can virtualize 10 IBM Mainframes". Don't you agree?
Also, there are claims that IBM Mainframes can handle 400,000 clients. I would not be surprised if IBM assumes all clients are idle and only a few of them do real work, judging from the example above.
Regarding Mainframe reliability: at one of the largest train installations in Scandinavia, the CEO said "our IBM mainframe crashed recently, that is strange. It normally never crashes. The last time it happened was six years ago". This is a contradiction. It seems their mainframe crashes every five years or so. I can google that article if you want to read it. Just tell me.
So, what, you're now essentially saying that you built your initial critique of their raw CPU numbers on a straw man? I don't see anybody promoting mainframes for cluster ("supercomputer" now, it seems) number crunching.
I wasn't even taking a side in this; neither has any particularly warm place in my heart. I was just adding a fuller picture to your, at best, snippet, and doing so in an extremely neutral tone (as you sort of noticed…*), since a lot of people on the web don't grasp the concept of pointing out an inaccuracy without taking sides (*…but you also took it further; plus the claims I mentioned are not made solely by IBM).
Mainframes have never been marketed for CPU-intensive tasks. They're highly reliable machines designed to handle large-I/O applications, which is why you see them used for airline reservation systems and other similar things that require low latency and a large number of concurrent users.
So we agree, then, that Mainframes have weak CPUs? That Intel's latest high-end server CPUs are much faster?
Of course, I also agree with the rest of what you say. I have never denied that Mainframes have superior I/O. You cannot find any posts from me saying otherwise, because I know Mainframes have good I/O; I have never disputed that. 296,000 I/O channels and lots of I/O processors help a lot.
Mainframes use a number of decades-old tricks to optimize multiple concurrent connections and minimize transaction overhead.
At least in our case, terminals (usually a PC-based UTS emulator like UTS Express, PEP, or Liaison) use synchronous I/O. Data is only sent on the network when the Transmit key or a special key like a function key is pressed, unlike Vxxx terminals, which flood the network with packets every time a key is hit. That's a waste of resources. UTS terminals are also smart enough to handle some editing locally and to send only those portions of a screen which have changed. I think 3270s can operate in synchronous mode, but I'm not sure.
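To make that concrete, here is a minimal C sketch of the send-only-what-changed idea. It is my own illustration, not the actual UTS or 3270 protocol; the row-level diff and all the names are hypothetical.

```c
#include <stdio.h>
#include <string.h>

#define ROWS 24
#define COLS 80

/* Compare the screen as last transmitted against the edited screen;
   "send" only the rows that differ, tagged with their row number.
   Unchanged rows generate no network traffic at all. */
void transmit_changes(const char sent[ROWS][COLS],
                      const char edited[ROWS][COLS])
{
    for (int r = 0; r < ROWS; r++)
        if (memcmp(sent[r], edited[r], COLS) != 0)
            printf("send row %2d: %.80s\n", r, edited[r]);
}
```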
Programs don't generally request memory dynamically at all … there is no MALLOC. A program is allocated a fixed block to use, period, and the OLTP transaction environment allocates that block to each program at load time. If you need more, you write more than one program or routine and chain them together, using buffers (DBAs in USAS/HVTIP, temp files like ONLINEFAST2A in TIP) to pass data between program segments if it won't fit in the COMPOOL buffer or equivalent. Applications don't run resident (only the scheduler does) but can be cached in memory (data area, instruction area, both) to reduce load times. The OLTP scheduler directs control to various programs as required. Commonly used files are resident in cache.
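A hedged C sketch of that fixed-block model, again with made-up names (WORK_AREA_SIZE, tx_entry) rather than any real TIP/HVTIP interface: the environment hands each transaction program a preallocated block, and the program never calls malloc().

```c
#include <stddef.h>
#include <string.h>

#define WORK_AREA_SIZE 32768  /* hypothetical fixed block, assigned at load */

/* The OLTP environment calls this with the program's one-and-only
   work area; all scratch state must fit inside it. */
void tx_entry(char *work, size_t work_len, const char *msg, size_t msg_len)
{
    if (msg_len > work_len) {
        /* Too big for our block: a real program would chain to a
           second program, passing data through a shared buffer. */
        return;
    }
    memcpy(work, msg, msg_len);  /* parse and build the reply in place */
}
```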
(There’s an OS layer underneath called the EXEC which maintains paged virtual memory, but application programs are completely unaware of its existence. They don’t need to know. All they know is the OLTP environment in which they run.)
The OS (in this case OS 2200) has a higher granularity of process control than UNIX generally does, with multiple classes of process priorities (HIGH EXEC, LOW EXEC, TIP/OLTP, BATCH, DEMAND (interactive userland), and with each of those areas subdivided into 64 process priorities. Stuff that needs priority gets it … this ensures that some slow batch process doesn’t touch OLTP performance.
It’s a fascinating world. Please don’t assume that mainframes are low tech. Much of the tech is old, yes, but there are reasons why those concepts still work well. Mainframe OSes like OS 2200, MCP, and zOS can be very different from UNIX and were designed to solve somewhat different problems.
Weird to reply to myself… Programs on the mainframe can obviously request memory in most instances, but when doing OLTP that is not allowed. The system handles it, so things like memory leaks caused by OLTP application programs simply don’t exist.
If you want to write a vanilla C program, though, you can do it. But it won't be running under the OLTP subsystem unless you agree to use its built-in memory management, its I/O subsystem, etc.
There's no contest. The Unisys ClearPath mainframes we use (running an airline-focused OLTP environment called HVTIP) are able to parse a screen, do several I/Os to various files, and return a response to the customer in under 30 milliseconds, and can do that under a load of more than 600 of those online transactions per second.
I’m sure IBM’s OLTP architecture (TPF?) is similar. The Unisys boxes use multiple IPs and IOPs, very fast files mapped into cache, and databases (in our case something called “freespace files”) which use sets of preallocated fixed-length records and are lightning fast compared to anything relational.
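As an illustration of why preallocated fixed-length records are so fast, here is a short C sketch (my own, not Unisys freespace-file internals): record N lives at byte offset N * RECLEN, so a single seek fetches it, with no index lookup or query planning.

```c
#include <stdio.h>

#define RECLEN 256L  /* hypothetical fixed record length */

/* Fetch record `recno` in O(1): its offset is directly computable. */
int read_record(FILE *f, long recno, char buf[])
{
    if (fseek(f, recno * RECLEN, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, RECLEN, f) == (size_t)RECLEN ? 0 : -1;
}
```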
Yes, Mainframes are good at one thing. I have never said Mainframes are slow at everything; I only talk about their CPUs. For instance, Mainframes have 296,000 I/O channels, I heard. Anyone understands that x86 cannot match that regarding I/O.
But I was talking about the Mainframe CPUs. They are slow. If you pit your mainframe against a Westmere-EX x86 CPU, then your mainframe, which cost X million USD, would bite the dust. "There is no contest".
I don't understand people who don't understand what I claim. I have ONLY said that Mainframe CPUs are weak. I don't understand why people would want to say "hey, our Mainframe can handle a lot of I/O – you are wrong!". No, I am not wrong. I have never talked about their I/O; I have been talking about their CPUs. Please read my posts again.
Their CPUs may be slow by some measures, but if that was a concern, the architecture would simply change.
Mainframers like myself never speak in terms of CPU power because that is not a design consideration for a mainframe system. Bringing it up serves no purpose, really. What's the point? Get a supercomputer if you want to do number crunching … that's what they're designed for.
Your statement about CPUs is much like criticizing a freight train because it can’t accelerate quickly. Yeah, that’s true. So?
Well, my unscientific benchmarks beg to differ.
Our AIX System moves data around at a rate that Intel can only dream about.
Our workloads (using the same software) run getting on for an order of magnitude better on a twin POWER7 core than on the latest i7-based server.
Well, the latest Intel Westmere-EX is only ~10% slower than the POWER7. But at a fraction of the price. Here are some benchmarks:
http://www.anandtech.com/show/4285/westmereex-intels-flagship-bench…
Next year, the Ivy Bridge version will arrive. It will be 40% faster than Westmere-EX, according to Intel.
Thus, x86 is catching up on POWER really fast, and next year it surpasses POWER7. There is much more research going into x86 (Intel and AMD) than into POWER (only IBM). Of course x86 will surpass POWER at some point in the future.
Earlier, the IBM POWER6 servers were several times faster than x86, and the POWER6 servers cost 5-10x more than x86 servers.
Today, IBM POWER7 is ~10% faster than x86 servers, and POWER7 costs only 3x more.
In the future, will the IBM POWER8 be on par with x86, or even slower? With such a slow CPU, the POWER8 must have a price similar to x86, or even be cheaper. We all know that IBM only does high-margin business. If the POWER8 is as cheap as x86, why would IBM continue to invest billions in POWER CPUs? IBM would start to lose money on POWER8.
Most likely, IBM will kill off the POWER CPUs, just as IBM recently killed the Cell CPUs. When POWER is killed off, then AIX will be killed too; AIX runs only on POWER. Coincidentally, IBM has publicly said that AIX will be killed off and replaced with Linux. This will happen when IBM starts to lose money on POWER, once x86 has surpassed POWER.
http://news.cnet.com/2100-1001-982512.html
Not only the HW but the software as well. Solid as a rock and very little risk of going down.
I did a migration from an IBM ERP to what was then Oracle on a Sun platform. Needless to say, there were hundreds of bugs in Oracle. That was some time ago. What a nightmare…
Well, you can't really disassemble that Intel server and move it to another location without taking it offline. With IBM's mainframes, you can do just that, a piece at a time, with mere seconds of downtime.
Is this a technology or more of a concept in itself? A bit like “CPU” (or some of its typical basic blocks at least); might kinda apply also to:
An OLTP subsystem is a specific environment within a mainframe that is different from the more traditional interactive and batch environments a mainframe also has (and which are conceptually similar to a UNIX shell and shell scripts).
It represents a very specific way of designing, writing, and executing software. Very controlled.
My coding experience is limited to the Sperry and Burroughs flavors of OLTP, not IBM's, but I suspect they are all similar in some elements of their basic design.
OLTP is an event-driven subsystem, running in addition to the base OS, that is dedicated to the very fast loading, execution, and termination of small single-purpose programs under the control of a central scheduling service.
This scheduling service is the only thing which really stays resident as such … it generally parses the first few bytes of each incoming message (often known as a “transaction ID” or “search ID”) and uses that information to reference a table which determines which OLTP program should be started to address the issue, often by program number.
One simple transaction program might build a screen when called, then exit … maybe the current weather, or a flight status display, or just a fill-in mask (form) prompting the user for more information.
Another transaction program might parse a screen that has been resubmitted with bits of data added, and then generate some form of positive or negative response before terminating.
A third might receive a message from another system (no user interface at all), store it in a defined manner, and exit. Or massage it and pass it along to a third system.
A fourth might start when a timer triggers, perform some tasks, and then exit. A lot of housekeeping runs are controlled in that way. Similar to, but predating things like crontab.
Programs don’t hang around. Usually. They might be configured to remain resident in memory for speed of access, and multiple copies might be kept in that state to speed things along, but control is never passed to them until the scheduler says so. It’s purely a push technology.
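A toy C sketch of that scheduler pattern. Everything here (tx_table, dispatch, the handlers) is hypothetical, not TIP, HVTIP, or CICS: the first bytes of each incoming message select a short-lived handler, which runs and immediately returns control to the resident scheduler.

```c
#include <stdio.h>
#include <string.h>

typedef void (*tx_handler)(const char *msg);

static void show_weather(const char *msg)  { (void)msg; puts("WX: weather screen"); }
static void flight_status(const char *msg) { printf("FL: status for %s\n", msg); }

/* Dispatch table keyed by transaction ID, the first bytes of the message. */
static const struct { const char *id; tx_handler run; } tx_table[] = {
    { "WX", show_weather },
    { "FL", flight_status },
};

static void dispatch(const char *msg)  /* the one resident piece */
{
    for (size_t i = 0; i < sizeof tx_table / sizeof tx_table[0]; i++)
        if (strncmp(msg, tx_table[i].id, strlen(tx_table[i].id)) == 0) {
            tx_table[i].run(msg);  /* load, run, terminate; nothing lingers */
            return;
        }
    puts("unknown transaction ID");
}

int main(void)
{
    dispatch("FL123");  /* a message pushed in by the network front end */
    return 0;
}
```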
Similar subsystems probably exist in UNIX environments somewhere, but I’ve not seen them. Tuxedo can be a little similar in some respects, since a Tux server tends to push messages to resident Tux services, but that sort of message passing is the only bit that’s the same. Tux programs are still run from a standard shell, request memory from the OS, etc., while OLTP programs don’t really work that way.
Hard to explain in a short note. Just like it's hard to explain the concept of "freespace files" to someone who only does relational databases. If you only knew how slllooowwww an RDBMS is compared to other established systems…
Interestingly, the IBM OLTP environment was developed for American Airlines, while the UNIVAC/Sperry solution was developed for United Airlines and Air Canada. Not sure where MCP transactions started, but I suspect either an airline or a bank.
Thanks, great
These technologies indeed rule! As geeks say: "If it ain't broke, don't fix it".
Then you probably don't read newspapers or magazines.
Or you have a bandwidth problem.
If scripts are a problem, get "NoScript".
At least InfoWorld's journalists will eat.
How can it be that the punch card didn't make it onto the list?
Invented in the 18th century, and still around and kicking in the form of optical disks.
I’d say that the longest-standing computer technology ever is the Cloud. There have been clouds on this Earth almost since there has been water.
That is so funny!
I see you hold the Cloud in just as high an esteem as I do. Welcome to The Resistance!
Obviously the InfoWorld guys don’t have much of a clue when they include Cobol and tape drives in their “survivor” list but not Fortran. Seems their paying readers include business people but not scientists.
Ever wondered what software is running those supercomputer supernova Ia simulations?
I would agree. There is still a LOT of Fortran code being used … and new software being written … in the airline industry and other places. Heck, I still help to maintain some. 🙂
I also agree, as I read a few months ago that Intel is still making and releasing Fortran compilers…
I didn’t know about those simulations. Good to know.
Actually, I've heard that large institutions like CERN have a problem with Fortran.
Today, there is not much of a performance benefit in using Fortran instead of C any more. The reason why these institutions still have to use it, despite people willing to program in such an ugly language being increasingly hard to find, is that they have a huge library of Fortran code around. A rewrite would already take decades, and will take even longer in the future.
Compatibility really is a huge thing…
Citation needed. Maybe you got confused with their problem of finding the Higgs boson?
You got benchmarks? Some astrophysical simulation written in fortran against the same thing written in C?
Fortran is easy to learn. That’s why most scientists who engage in number crunching use it like an everyday tool and not like a shiny/beautiful asset to show off.
You assume that recently built supercomputers are running Fortran code that's decades old?
And that's why a Google search for (+fortran supernova simulation) yields 136,000 no-nonsense hits?