Linked by snydeq on Fri 12th Aug 2011 03:55 UTC
InfoWorld's Galen Gruman highlights 18 technologies that remain core to the computing experience for IT, engineers, and developers 25 to 50 years after their inception. From Cobol to the IBM mainframe to C to x86, these high-tech senior citizens not only keep kicking but provide the foundations for many vital systems that keep IT humming.
Thread beginning with comment 484715
RE: Infoworld
by steampoweredlawn on Fri 12th Aug 2011 07:25 UTC in reply to "Infoworld"
steampoweredlawn
Member since:
2006-09-27

FTFA:

1. Cobol: 1960
Developed by a government/industry consortium, the Common Business-Oriented Language became the standard for financial and other enterprise software systems, and is still in use today in legacy systems that power many government, financial, industrial, and corporate systems.

2. Virtual memory: 1962
A team of researchers at the University of Manchester working on the Atlas project invented a way for computers to recycle memory space as they switched programs and users. This enabled the time-sharing concept to be realized.
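The core idea behind Atlas's invention, recycling a small physical memory among many programs by translating addresses through a page table and evicting pages on demand, can be sketched in a few lines of Python (a toy model, not how any real MMU is built):

```python
from collections import OrderedDict

# Toy illustration of virtual memory: a tiny pool of physical frames
# is recycled among virtual pages, evicting the least recently used.
NUM_FRAMES = 2

page_table = OrderedDict()  # virtual page -> physical frame
free_frames = list(range(NUM_FRAMES))
faults = 0

def access(page: int) -> int:
    """Translate a virtual page to a frame, faulting it in if needed."""
    global faults
    if page in page_table:
        page_table.move_to_end(page)  # mark as recently used
        return page_table[page]
    faults += 1
    if free_frames:
        frame = free_frames.pop()
    else:
        _, frame = page_table.popitem(last=False)  # evict LRU page
    page_table[page] = frame
    return frame

for p in [0, 1, 0, 2, 1]:  # 2 frames serve 3 pages by recycling
    access(p)
print(faults)  # 4: pages 0, 1, 2 faulted in; page 1 evicted, then refetched
```

Two frames end up serving three pages, which is the whole trick: the illusion of more memory than physically exists, which is what made time-sharing practical.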

3. ASCII: 1963
The American Standard Code for Information Interchange, which defines how English-language letters, numerals, and symbols are represented by computers, was formalized in 1963. Today, it's been extended from 128 characters to 256 to accommodate accented letters, and is being replaced by the multilingual Unicode standard (created in 1988), which still uses the ASCII codes at its core.
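That last point is easy to demonstrate (a quick Python check, purely illustrative): the original ASCII code points carry over unchanged into Unicode, and UTF-8 encodes pure-ASCII text to the very same bytes.

```python
# Illustrative only: ASCII code points are preserved in Unicode,
# and UTF-8 encodes pure-ASCII text byte-for-byte identically.
text = "ASCII"
print(ord("A"))                                       # 65, the code ASCII assigned it
print(text.encode("utf-8") == text.encode("ascii"))   # True: identical bytes
print(len("é".encode("utf-8")))                       # 2: non-ASCII needs multi-byte UTF-8
```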

4. OLTP: 1964
IBM invented OLTP (online transaction processing) when it created the Sabre airline reservation system for American Airlines. It linked 2,000 terminals (via telephone) to a pair of IBM 7090 computers to handle reservations processing in just seconds. The fundamental OLTP architecture is in use today in everything from e-commerce to, well, airline reservations.

5. IBM System/360 mainframe: 1964
It cost IBM $5 billion to develop the family of six mutually compatible computers and 40 peripherals that could work together, but within a few years, it was selling more than 10,000 mainframe systems a year. The System/360 architecture remains in use today as the backbone for current IBM mainframes.

6. MOS chip: 1967
Fairchild Semiconductor invented the first MOS (metal-oxide semiconductor) chip, the technology still used for computer chips today in the form known as CMOS (complementary metal-oxide semiconductor). The original Fairchild CPU handled eight-bit arithmetic. Note: Jack Kilby created the first integrated circuit at Texas Instruments in 1958, using a different process based on germanium.

7. C: 1969
Bell Labs' Dennis Ritchie designed the C programming language to use with the then-new Unix operating system. The C language is arguably the most popular programming language in the world -- even today -- and has spawned many variants.

8. Unix: 1969
Kenneth Thompson and Dennis Ritchie at Bell Labs developed the Unix operating system as a single-processor version (for use on minicomputers) of Multics OS, a multiuser, multitasking OS for time sharing and file management created earlier in the decade for mainframes.

9. FTP: 1971
MIT student Abhay Bhushan developed the File Transfer Protocol (first known as the RFC 114 draft standard). He later helped develop the protocols used for email and the ARPAnet defense network.

10. Ethernet: 1973
Robert Metcalfe (later InfoWorld's publisher and then longtime columnist) invented the networking connection standard, which became commercialized in 1981. Its successors are now a ubiquitous standard for physical networking.

11. x86 CPU architecture: 1978
Intel's 8086 processor debuted what became known as the x86 architecture that today still forms the underpinnings of the Intel and AMD chips used in nearly all PCs, including those that run Windows, Linux, and Mac OS X.

12. Gnu: 1983
Richard Stallman, who later formed the Free Software Foundation, didn't like the notion of software being controlled by corporations, so he set out to produce a free version of AT&T's Unix based on the principles espoused in his book "The Gnu Manifesto." The result was Gnu, an incomplete copy that languished until Linus Torvalds incorporated much of it in 1991 into the Linux operating system, which today powers so many servers.

13. Tape drive: 1984
IBM's 3480 cartridge tape system replaced the bulky, awkward tape reels that had defined computer storage since the 1960s with the enclosed drive systems still in use today. IBM discontinued the 3480 tape cartridge in 1989, but by then its format was widely adopted by competitors, ensuring its survival.

14. TCP/IP: 1984
Although adopted by the military's ARPAnet in 1980, the first formal version of the TCP/IP protocol was agreed to in 1984, setting the foundation for what has now become a universal data protocol that undergirds the Internet and most corporate networks.

15. C++: 1985
When AT&T researcher Bjarne Stroustrup published "The C++ Programming Language," it catapulted object-oriented programming into the mainstream, forming the basis for much of the code in use today.

16. PostScript: 1985
John Warnock and Charles Geschke of Adobe Systems created the PostScript page description language at the behest of Apple co-founder Steve Jobs for use in the Apple LaserWriter. PostScript was an adaptation of the InterPress language that Adobe created in 1982 for use in laser printers, which were beginning to emerge from the labs into commercial products. PostScript is still used in some printers today, but its primary function is as the foundation for PDF.

17. ATA and SCSI: 1986
Two pivotal and long-lasting data cabling standards emerged the same year: SCSI and ATA. The Small Computer Systems Interface defined the cabling and communication protocol for what became the standard disk connection format for high-performance systems. SCSI originated in 1978 as the proprietary Shugart Associates System Interface and competed with the ATA (aka IDE) interface that also debuted in 1986 with Compaq's PCs, but the ATA specification was not formally standardized (under the ATAPI name) until 1994. SCSI today is mainly used in server storage, whereas ATA continues to be used in desktop PCs in both parallel (PATA) and serial (SATA) versions.

Reply Parent Score: 18

RE[2]: Infoworld
by spiderman on Fri 12th Aug 2011 09:41 in reply to "RE: Infoworld"
spiderman Member since:
2008-10-23

Thanks for that.


12. Gnu: 1983
Richard Stallman, who later formed the Free Software Foundation, didn't like the notion of software being controlled by corporations, so he set out to produce a free version of AT&T's Unix based on the principles espoused in his book "The Gnu Manifesto." The result was Gnu, an incomplete copy that languished until Linus Torvalds incorporated much of it in 1991 into the Linux operating system, which today powers so many servers.
Omfg!
So Linus Torvalds incorporated GNU into his Linux operating system?

That is another reason why I don't click on Infoworld links. They write complete crap.

Reply Parent Score: 1

RE[3]: Infoworld
by Kebabbert on Fri 12th Aug 2011 11:30 in reply to "RE[2]: Infoworld"
Kebabbert Member since:
2007-07-27

So Linus Torvalds incorporated GNU into its Linux operating system?

That is another reason why I don't click on Infoworld links. They write complete crap.

Yes, that is questionable. It would be better to say that Linus Torvalds created the first GNU distro. Earlier, there was no kernel for the GNU operating system, so the Finnish student Linus Torvalds filled that gap and created the first GNU distro. Then other GNU/Linux distros spawned, of course.



Regarding IBM mainframes, it always surprises me that they still live, as IBM mainframe CPUs are much slower than a fast x86 CPU. The biggest IBM Mainframe, the z196, has 24 of these slow CPUs and costs tens of millions of USD. If you have an 8-socket x86 server with Intel Westmere-EX, then you have more processing power than the biggest z196 IBM Mainframe. You can emulate IBM mainframes on your laptop with the open source emulator TurboHercules.

Here is the z196 CPU, which IBM dubbed the "World's fastest CPU" last year. It runs at 5.26 GHz and has almost half a GB of cache (L1+L2+L3), but it is still much slower than a fast x86 CPU. How could IBM fail so miserably with the z196 transistor budget?
http://www-03.ibm.com/press/us/en/pressrelease/32414.wss



And then there's old COBOL. It's not particularly sexy or hot; it is boring and ugly, used only on old dinosaurs (i.e., mainframes).

Reply Parent Score: 4

RE[2]: Infoworld
by zima on Fri 12th Aug 2011 12:33 in reply to "RE: Infoworld"
zima Member since:
2005-07-06

2. Virtual memory: 1962
A team of researchers at the University of Manchester working on the Atlas project invented a way for computers to recycle memory space as they switched programs and users.

Is this a technology or more of a concept in itself? A bit like "CPU" (or at least some of its typical basic blocks); the same might kinda apply to:

4. OLTP: 1964
IBM invented OLTP (online transaction processing) when it created the Sabre airline reservation system for American Airlines. It linked 2,000 terminals (via telephone) to a pair of IBM 5070 computers to handle reservations processing in just seconds. The fundamental OLTP architecture is in use today in everything from e-commerce to, well, airline reservations.

Reply Parent Score: 1

RE[3]: Infoworld
by rcsteiner on Mon 15th Aug 2011 21:21 in reply to "RE[2]: Infoworld"
rcsteiner Member since:
2005-07-12

An OLTP subsystem is a specific environment within a mainframe that is different from the more traditional interactive and batch environments a mainframe also has (and which are conceptually similar to a UNIX shell and shell scripts).

It represents a very specific way of designing, writing, and executing software. Very controlled.

My coding experience is limited to the Sperry and Burroughs flavors of OLTP, not IBM's, but I suspect they are all similar in some elements of their basic design.

OLTP is an event-driven subsystem running in addition to the base OS which is dedicated to the very fast loading, execution, and termination of small single-purpose programs under the control of a central scheduling service.

This scheduling service is the only thing which really stays resident as such ... it generally parses the first few bytes of each incoming message (often known as a "transaction ID" or "search ID") and uses that information to reference a table which determines which OLTP program should be started to address the issue, often by program number.

One simple transaction program might build a screen when called, then exit ... maybe the current weather, or a flight status display, or just a fill-in mask (form) prompting the user for more information.

Another transaction program might parse a screen that has been resubmitted with bits of data added, and then generate some form of positive or negative response before terminating.

A third might receive a message from another system (no user interface at all), store it in a defined manner, and exit. Or massage it and pass it along to a third system.

A fourth might start when a timer triggers, perform some tasks, and then exit. A lot of housekeeping runs are controlled in that way. Similar to, but predating things like crontab.

Programs don't hang around. Usually. They might be configured to remain resident in memory for speed of access, and multiple copies might be kept in that state to speed things along, but control is never passed to them until the scheduler says so. It's purely a push technology.
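That scheduler-plus-table arrangement can be sketched in a few lines of Python (all names here are hypothetical; no real TP monitor works exactly like this): a resident scheduler peels the transaction ID off the front of each message, consults a dispatch table, and runs the matching short-lived program.

```python
# Hypothetical sketch of an OLTP-style dispatcher as described above:
# a resident scheduler parses the first bytes of each incoming message
# (the "transaction ID") and looks up which single-purpose program to run.

TXN_ID_LEN = 4  # assume a fixed-width transaction ID prefix

def show_weather(payload: str) -> str:
    # Build-a-screen style transaction: emit a display, then terminate.
    return "WEATHER: sunny, 72F"

def flight_status(payload: str) -> str:
    # Parse resubmitted input, answer, then terminate.
    return f"FLIGHT {payload.strip() or '????'}: ON TIME"

# The dispatch table the scheduler consults, keyed by transaction ID.
DISPATCH = {
    "WTHR": show_weather,
    "FLST": flight_status,
}

def schedule(message: str) -> str:
    """Resident scheduler: route one message to its transaction program."""
    txn_id, payload = message[:TXN_ID_LEN], message[TXN_ID_LEN:]
    handler = DISPATCH.get(txn_id)
    if handler is None:
        return f"ERR: unknown transaction {txn_id!r}"
    return handler(payload)  # load, run, respond, terminate

print(schedule("WTHR"))        # WEATHER: sunny, 72F
print(schedule("FLST UA123"))  # FLIGHT UA123: ON TIME
```

Only `schedule` and the table stay resident; each handler is conceptually loaded, run, and thrown away per message, which is the "purely push" property described above.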

Similar subsystems probably exist in UNIX environments somewhere, but I've not seen them. Tuxedo can be a little similar in some respects, since a Tux server tends to push messages to resident Tux services, but that sort of message passing is the only bit that's the same. Tux programs are still run from a standard shell, request memory from the OS, etc., while OLTP programs don't really work that way.

Hard to explain in a short note. Just like it's hard to explain the concept of "freespace files" to someone who only does relational databases. If you only knew how slllooowwww an RDBMS is compared to other established systems...

Interestingly, the IBM OLTP environment was developed for American Airlines, while the UNIVAC/Sperry solution was developed for United Airlines and Air Canada. Not sure where MCP transactions started, but I suspect either an airline or a bank.

Edited 2011-08-15 21:27 UTC

Reply Parent Score: 2

RE[2]: Infoworld
by Liquidator on Fri 12th Aug 2011 15:01 in reply to "RE: Infoworld"
Liquidator Member since:
2007-03-04

Thanks, great ;)
These technologies indeed rule! As geeks say: "If it ain't broke, don't fix it" ;)

Edited 2011-08-12 15:01 UTC

Reply Parent Score: 2