Once the most aggressive users of IT, financial institutions have learned to make do with less. But few can go on cost-cutting indefinitely. Computer- and telecoms-makers could soon be feasting again. In most cases, IT systems have not been touched for more than a decade – “note the antiquated OS/2 operating system that runs many an IBM computer on tellers’ desks”, the Economist reports.
It’s interesting, and not surprising, to read that financial services are not beclouded by IT/media hype. Some of them still use mainframes that are 20 years old. Now, I can only imagine what OS they run on. 😉
Regards,
Mystilleef
Upgrading for upgrading’s sake seems foolish to me.
When I worked in the airline industry less than two years ago, the system I worked on generated flight plans and did a number of other flight-critical calculations used by the pilots, flight dispatchers, and load planners for a major airline.
Was that software written in a modern language? Did it run on a mainstream platform?
No, and no. It was originally written in the late 1960’s and early 1970’s in a mix of assembler and a variant of FORTRAN 66 called FIELDATA FORTRAN V, a version of that language so old that it recognized neither CHARACTER variables nor block IF-THEN-ELSE structures, and it ran on a mainframe platform which is now considered esoteric but which was (and still is) very commonly used in the airline industry — the Unisys 2200 or Clearpath IX, formerly known as the Sperry UNIVAC 1100.
Why is it still in use? Because the software is thoroughly debugged, the compiler and its libraries are efficient and thoroughly debugged, and the hardware and software platform itself is extremely well-suited for doing what it was designed to do — running mission-critical transaction-based software and delivering real-time text-based message traffic.
It could perhaps be ported to a more modern language and a more modern platform, but the time and effort involved would be tremendous, and the payoff would (in my opinion) be minimal.
It is true that the US is lagging behind with regard to online banking, mobile phones, etc., compared with some of the other developed nations. However, the author of this article leaves out an important factor: Context.
Sure, if your country is the size of Florida, it is much easier to have state-of-the-art cell phone networks, banking systems, and so on. But the US economy is MUCH more diverse than that.
Because of the sheer size of the US geography and economy, it is damn near impossible that we will be able to move as quickly as smaller nations. And it is very unlikely that we will have the consistency of service that some other nations have.
I’m not using this as an excuse; I think we ABSOLUTELY should upgrade much of our infrastructure, but let’s be realistic here.
I mean, could DoCoMo (Japan’s largest cell phone carrier) or Nippon (Japan’s largest banking conglomerate) fare any better in this environment?
-Bryan
Smart guys, if you ask me. Why pay for the junk that IT companies are pouring out now? It’s all a rip-off.
Still working, still secure; when you talk of a few hundred billion a day in transactions that could be lost to “poorly written” software, it makes sense.
…although I was too drunk to fully absorb the article, I can clearly state that as an employee of a large financial institution I agree that financial institutions are way behind the curve as far as IT technology is concerned. I mean, damn, my workstation at work is so pathetically ancient that most weeks I just simply refuse to go into work and instead work from home using VPN on my somewhat less ancient home PCs. 2 years ago we were still using OS/2 Warp 3 at work. My work PC now has Windows NT 4.0 Service Pack blah blah blah but it’s worse than OS/2 because NT 4.0 just runs like a glacier on a Pentium 233 non-MMX with 128 MB of friggin’ EDO RAM. And our IT folks can’t get me anything better. The company I work for is run by a bunch of cheapskates who are much more concerned about themselves getting huge bonuses at the end of each year than about investing in the backend of the business. Big surprise there! Our mainframes (which I primarily program for) are also vastly underpowered considering the HUGE amount of account processing we do every day, week, and month. ARGHHHH! I hate my job.
OK, now I need another drink. Must… not… think… about… work…
My work PC now has Windows NT 4.0 Service Pack blah blah blah but it’s worse than OS/2 because NT 4.0 just runs like a glacier on a Pentium 233 non-MMX with 128 MB of friggin’ EDO RAM.
Then you need to get it looked at. I spent about two years from early 1996 happily running NT4 on a Pentium 100 (overclocked to 110) with 40MB (later 64) of RAM. That machine was capable of anything I wanted to do, from word processing to web browsing to writing CDs at 4x while I played Quakeworld, so a box with more than twice the power should be more than capable. Your machine is either broken or grossly misconfigured. Heck, you could run NT4 usably on a 486 if it had enough RAM.
It should be faster than OS/2 as well – the main reason I switched from Warp 3.0 to NT4 (beta at the time) was because NT was faster.
The ironic thing is that our ability to deliver high-quality software is much greater than even a couple of years ago. For apps that small teams may write, Agile methodologies allow projects to be flexible without costing so much money.
Test-driven development is also becoming known, while the simpler practice of unit testing is probably now a staple.
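For anyone who hasn’t seen the practice, here is a minimal sketch of the idea using Python’s built-in unittest module; the interest calculation and its names are invented for illustration. In test-driven development you would write tests like these first, then write just enough code to make them pass.

import unittest

def monthly_interest(balance, annual_rate):
    # Hypothetical helper: simple monthly interest on a balance.
    if balance < 0 or annual_rate < 0:
        raise ValueError("balance and rate must be non-negative")
    return balance * annual_rate / 12

class MonthlyInterestTest(unittest.TestCase):
    def test_typical_balance(self):
        # 6% annual rate on $1,200 should come to $6 per month.
        self.assertAlmostEqual(monthly_interest(1200, 0.06), 6.0)

    def test_zero_balance(self):
        self.assertEqual(monthly_interest(0, 0.06), 0)

    def test_negative_balance_rejected(self):
        with self.assertRaises(ValueError):
            monthly_interest(-100, 0.06)

if __name__ == "__main__":
    unittest.main()

Nothing beyond the standard library is needed; the same shape carries over to JUnit and friends.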
Java, despite what many say, is definitely a secure language and has so many APIs that people don’t have to write their own buggy versions, except maybe for optimization.
And the dotcom fallout has left much better programmers out there. Offshoring is a possibility, but that’s silly since you want local programmers who know financial apps, and you probably want them on-site.
It will be interesting to see how many of these financial institutions currently running OS/2 will bother moving to eComStation instead of Windows.
I /hate/ upgrades for the sake of upgrades. If the computer is usable and people get their work done efficiently, who freaking cares how old it is and if it isn’t running Win XP or whatnot – OS/2 is dandy, efficient, and stable. Do those computers work? Good, leave it alone! I’m typing this from work on a 1 GHz laptop running applications that would be perfectly at home on a much lesser machine; desktop apps don’t need tons of processing. I’m not doing 3D rendering, I usually use office apps and admin apps. *shrugs* I’m sure next year I’ll have yet another new laptop, but do I /need/ it? Er, no…
NASA runs 486s, I believe… they were upgraded. So it doesn’t surprise me financial institutions would upgrade. BTW, the person who said NT is slow needs to check their PC. NT is amazingly fast if it runs on supported hardware. I am a hardware aficionado; I love the latest and greatest. But I can’t afford it. So why would financial institutions move their OS and risk losing data in the migration? Especially for banks, uptime is an issue. You can’t have an hour where there are no DB transactions. Of course there could be a backup server, but it negates the whole purpose of upgrading if it works fine. When it comes to the server/client side, you make your hardware last to the point where you can’t find that piece anymore. There is no reason why you should upgrade if most of your apps work.
“It is true that the US is lagging behind with regard to online banking, mobile phones, etc., compared with some of the other developed nations. However, the author of this article leaves out an important factor: Context.
Sure, if your country is the size of Florida, it is much easier to have state-of-the-art cell phone networks, banking systems, and so on. But the US economy is MUCH more diverse than that.
Because of the sheer size of the US geography and economy, it is damn near impossible that we will be able to move as quickly as smaller nations. And it is very unlikely that we will have the consistency of service that some other nations have.
I’m not using this as an excuse; I think we ABSOLUTELY should upgrade much of our infrastructure, but let’s be realistic here.
I mean, could DoCoMo (Japan’s largest cell phone carrier) or Nippon (Japan’s largest banking conglomerate) fare any better in this environment?
-Bryan”
Well, Australia is the size of the USA (excluding Alaska) with a population of less than 20 million. Funny thing is, our infrastructure runs very well. We only have two large telecoms (one is a near monopoly), two major airlines and about a dozen banks.
No potholed roads in our major cities, we have a huge electricity surplus, high-quality free healthcare, etc. Then of course, by US political standards, we are practically communists.
Well, Australia is the size of the USA (excluding Alaska) with a population of less than 20 million. Funny thing is, our infrastructure runs very well. We only have two large telecoms (one is a near monopoly), two major airlines and about a dozen banks.
No potholed roads in our major cities, we have a huge electricity surplus, high-quality free healthcare, etc. Then of course, by US political standards, we are practically communists.
Trouble is privatisation and the incumbent Government are doing a lot of damage to these (and our roads are a pretty bad example, really – outside (and in many places inside) the capital cities, they’re atrocious). I’ve been voting for the Liberals for the last four elections, but I think it’s about time we brought Labour back for a few terms so that in ten years we *still* have high quality free healthcare, education and good infrastructure. Some things should be run by government not because they do it better or cheaper, but because doing so guarantees universal access.
Or, as I put it in more succinct terms: a Labour Government makes Australia a great place to live, but they need to bring in a Liberal Government every now and then to balance the books.
Like I mentioned before, in a public corporation the shareholders, the employees, and the least educated (who are led by hype) push for IT downgrades (in the case of Microsoft).
I have a brother that used to work for a large financial company. Some of the antique computer systems they ran range from an 8088 with MFM drive handling some task it was ideally suited for to running Lotus Notes on OS/2 <shiver>. The mainframes they ran were there for a good reason: apart from the system being poorly implemented, the IBM mainframe was the only machine that was capable of handling their database needs. Open systems couldn’t touch it.
I think the major problem with places like this and their old hardware is the old bastards that work the systems. They spent a lot of money getting Novell certified… you think they want to learn something new and piss away their cert? It’s these people that perpetually inflict the good people of the earth with these old systems. They will go to their deathbed recommending a company stick with OS/2. Companies will realize that it is going to become harder to find some of these relics to manage their antique systems, and with that comes a high price in wages and Geritol.
If you want a company to upgrade their technology, then you will have to force them to do it by publishing articles like this one and bringing up the issue at shareholder meetings. The problem, though, is that the new business technology out there now is a rip-off. The IT companies will force all businesses using their technology to purchase continuous upgrades; that’s what it’s all about. The IT industry is only interested in supplying a format from which they can have total control over the users, and be able to force the businesses to migrate every few years. These IT companies only adopt technology to serve their need for control.
It’s a shame to upgrade, because most people think that the new technology is better. Well, it’s much worse for the business upgrading, but it’s better for the IT software company who has laid the trap. The IT software company only has a business’s interest in mind after they have them in handcuffs. It has absolutely nothing to do with computer science; if you want that, then your only chance is the old technology, but really there is nothing, because people are way too slow at the trigger.
Most people don’t think about the future, and that is why we have something like 80% of the country’s wealth controlled by 20% of the people.
At some point, as this inequality grows even more dramatic, civil war will occur, and then people start all over again on equal footing. It is characteristic of a society, and before that, of the concept of private property.
Anyway, these financial institutions deserve a great deal of praise for holding onto existing technology that is still serving its purpose. They have found value in their software; that’s something that we don’t have today, or at least it is not a leading priority.
This reads like a friend-of-a-friend story. He admits to having no first-hand experience of the situation, but has lots to say about it. To reference another’s experience implies that he has none to offer himself. I’m not sure why he was asking us to trust him…
I have had actual experience in large banks, large transportation companies, and large telecommunications companies using OS/2, Novell and mainframe systems.
Some of the antique computer systems they ran range from an 8088 with MFM drive handling some task it was ideally suited for to running Lotus Notes on OS/2
What version of OS/2 runs on an 8088??? None of my copies of OS/2 will do that.
The mainframes they ran were there for a good reason: apart from the system being poorly implemented, the IBM mainframe was the only machine that was capable of handling their database needs
The fact that no PC-based database can touch the performance of a mainframe database has nothing to do with it???
… old bastards that work the systems…
These (derogatorily referred-to) SMEs also know to spend their time getting the work done rather than dreaming about new hardware and software.
…spent a lot of money getting Novell certified…
You say that like it is a bad thing.
They will go to their deathbed recommending a company stick with OS/2
Can you point to the study that shows the advantages of moving away from a functional, reliable and supported system? The company will save billions of dollars (estimate à la the Microsoft 2003 commercial) if they don’t have to build a new infrastructure, acquire new hardware, retrain end users, and retrain development staff.
…perpetually inflict the good people of the earth…
What about perpetually inflicting the earth with old hardware from upgrades, leaching poisonous and cancer-causing chemicals?
My Mac IIsi used to boot OS 7.1 in around 3 seconds with a mere 17 MB of RAM and ran ClarisWorks probably faster than my current 1 GHz/512 MB RAM machine runs Office 2000.
10 years ago I used to use WordPerfect 5.1 on a 486/33SX – the app started too fast to read the splash screen. I’d love to try WP5.1 on my current machine to see how it went.
…then why fix it? This is especially true in the financial world, where corporations don’t want to risk software or OS changes and chance a new software blunder when the current solutions have NO problems.
Yeah, being a CNE is a bad thing. Mainly because CNEs keep companies in the stone age of computing. Don’t get me wrong, Novell had its day when it was an awesome solution, but that was close to 10 years ago, back when IPX/SPX over Token Ring was hot stuff. These people fall into the same group as the UNIX bigots. Their stubborn resistance to change and small-mindedness causes the next generation of innovators headaches trying to mate the old with the new. Ever try to get a mainframe database that doesn’t talk ASCII to talk to a modern relational database (something like the conversion sketched below)? Or how about getting MS Outlook to function with Lotus Notes properly? But anyway, thank you for responding and providing an example for others to watch out for. By the way, I never said that OS/2 was running on an 8088; there was a “to” there:
“Some of the antique computer systems they ran range from an 8088 with an MFM drive handling some task it was ideally suited for TO running Lotus Notes on OS/2 <shiver>”
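To give a flavour of the plumbing involved, here is a rough Python sketch of decoding a fixed-width EBCDIC record before loading it into an ASCII/Unicode relational database. The cp037 code page and the record layout are assumptions made up for illustration; real extracts also carry packed-decimal fields that need extra handling.

EBCDIC_CODEC = "cp037"  # assumed US EBCDIC code page

# Hypothetical 30-byte record layout: account number (10), name (15), branch (5).
FIELDS = [("account", 0, 10), ("name", 10, 25), ("branch", 25, 30)]

def decode_record(raw):
    # Translate the whole record from EBCDIC to a Python string, then slice out fields.
    text = raw.decode(EBCDIC_CODEC)
    return {name: text[start:end].strip() for name, start, end in FIELDS}

# Simulate a record as it might arrive from the mainframe.
record = "0000123456JOHN Q PUBLIC  00042".encode(EBCDIC_CODEC)
print(decode_record(record))
# {'account': '0000123456', 'name': 'JOHN Q PUBLIC', 'branch': '00042'}

The decoded values can then go straight into an SQL insert. The tedious part in practice is not the character translation but agreeing on the record layouts.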
It’s a waste of money for you, me, or my bank to spend money on software that adds nothing we need.
I don’t care if my ATM transactions are handled by fifty gnomes with pencil and paper, the bank would be wasting money to buy new software to do a job that its current setup is doing just fine, thank you.
I suspect the only thing I’d get out of my bank switching from OS/2 to Windows is more downtime and higher fees.
>It should be faster than OS/2 as well – the main reason I switched from Warp 3.0 to NT4 (beta at the time) was because NT was faster.
HOOOO!!! Good joke or good troll (no difference). To those of us who have used both on the same hardware, NT truly is a slug. Worse yet, a slug that crapped its pants all too often with BSODs and a file system that never could figure out how to quit fragmenting its files (HPFS is almost self-healing when it comes to fragmentation, while NTFS needs defragging almost every day when used hard). NT is/was a bloated cruel joke. I’ve got a 200MHz system at work that has run OS/2 4.0 24/7 since 1996. I’ve gone through 3 systems at work with NT 4.0 and its BSoDs; the final machine was an SMP one that had run BeOS for 2 years w/o problems, but NT would BSoD once a day or more. Yeah, let’s blame the hardware for crappy software.
Unfortunately I never got a chance to try OS/2, but I have to chime in that I find it hard to believe ANYTHING could be slower than NT4, it’s the slowest OS I’ve ever used. Windows 2000 is noticeably faster than NT4 on the same hardware. NT4 is truly a dog.
I was talking to my uncle, a software engineer (a veteran), some days ago, and he mentioned how in the heyday they would have to book four weeks in advance to make use of a Fortran compiler, or any compiler for that matter. He said in those days writing code was like writing a semester’s scientific research project. You just couldn’t afford to screw up. The code was revised, debated upon, and scrutinized by as many scientists, engineers and mathematicians as possible before it was submitted for compilation, which was then expensive and scarce.
Fast forward some decades to today, and we have a bunch of hackers and script kiddies who write code right after returning from the pub. They learnt how to write C from an outdated tutorial somewhere on the net. Compilers are not hard to come by. Compilers are lenient and forgiving of careless coders. When a program fails to compile, the compiler spits out the error, and the hacker hacks his way around the error, oftentimes not evaluating the ripple effect that might have on the program as a whole. The hacker couldn’t care less, or doesn’t even understand the consequences of this.
Then you have companies, like Microsoft, who hire individuals whose function it is to make sure that Windows programs compile at all costs. They hack their way around a series of errors just to make the damned program compile. The source code is reviewed by none but a group of 20 geniuses, who are never wrong. Then they release the software, only to release patches hours later (Windows XP, anyone?). And the whole cycle continues.
The fact is, the quality of software has continued to degrade. The wasteful society we live in takes a lot for granted. The marketing department decides what features go into commercial code. Software applications today try to do more than they are designed to do rather than specialize in a given task. Then we get surprised that antique software could withstand the test of time for over 30 years. Well, I’ve mentioned some of the reasons. I’m sure there are a lot more I’m missing. It’s sad.
-Mystilleef
To those of us who have used both on the same hardware […]
Like me, you mean ?
Worse yet, a slug that crapped its pants all too often with BSODs and a file system that never could figure out how to quit fragmenting its files (HPFS is almost self-healing when it comes to fragmentation, while NTFS needs defragging almost every day when used hard).
I’ve never understood the “must defragment NTFS” urban myth. After 7 – 8 years of using NTFS-based systems, I’ve never felt a need to defragment any of my hard disks. Even when I have, out of sheer curiosity, I’ve never noticed any performance improvement.
The version of HPFS shipping to most customers wasn’t even 32 bit until Warp 4.0 (and possibly even later – I lost interest in OS/2 not long after switching). NTFS (perhaps less so now, but certainly back then) was superior to HPFS in pretty much every way imaginable.
NT is/was a bloated cruel joke.
Your experience does not agree with mine, or that of a number of colleagues who all moved from OS/2 to NT4 at about the same time I did (early 1996, which was roughly the same time IBM lost interest in OS/2 except for a small subset of customers). We found NT4 to be faster, more stable and to offer better features on the same hardware. Otherwise, well, we wouldn’t have switched.
This is not even getting into things like OS/2’s horribly complex and unreliable installation procedure and the PITA of installing fixpacks. Other notable reasons were:
NTFS
The dreaded OS/2 Single (more accurately, synchronous) Input Queue.
Better caching and memory usage
Software RAID
Software support and compatibility (OS/2’s DOS compatibility was better, but the only DOS programs we had were games, which OS/2 was never especially good at).
NT was multiuser by design, OS/2 was not (this is getting a bit more into philosophical territory though).
I’ve gone through 3 systems at work with NT 4.0 and its BSoDs; the final machine was an SMP one that had run BeOS for 2 years w/o problems, but NT would BSoD once a day or more. Yeah, let’s blame the hardware for crappy software.
I had a grand total of five system crashes using NT4 from early 1996 (that’s including about six months of using a beta version) through to the release of Windows 2000 when I switched to it. Three of those crashes were easily and immediately attributable to hardware and a fourth to a poorly written kernel level driver (big finger to you, McAfee). Only one remains a mystery.
One would expect, if it were the software that was “crappy”, *everyone* would have had similar experiences to you. Yet they have not.
I’ve been working in the IT world as a software developer for just over 15 years.
Having spent time building systems to meet the technology du jour, in a lot of cases it seemed like wasted time.
I am writing code today similar to what I did 15 years ago, but today it seems it takes a lot more hardware and overly complex software to do the exact same thing…it’s counter-productive.
I used to have a 600 MHz PC on my desk; now I have a 2.4 GHz PC on my desk. The SDK and server software I use is more unreliable on the faster computer than on the slower one.
I hate vendors that force upgrades on customers. When you think about it, does the latest version of Office really do anything better than Office 6.0 did 10 years ago?
My bank still uses mainframe based systems accessed via OS/2 workstations. It is secure, they don’t have to worry about viruses…it just works. The bank staff get their work done effectively and efficiently, they don’t have to apply security updates to their machines on a daily/weekly basis.
Back in the late 80s, we only did an OS upgrade once every couple of years…now it’s an almost daily exercise.
Perhaps I’m just fed up with the constant upgrade cycles.
BTW, to the person wondering about WordPerfect 5.1 on W2K… it works very well.
If a bank is forced to upgrade to new hardware, then it should have a custom platform rather than falling into the vendor trap/lock-in that is currently being offered to the masses.
If there are no technical problems, though, then it should stay with the current system, even if it is old.
I have used Warp, Merlin and NT 4.0 on a Pentium Pro 200 with 64 megs of RAM, and I have no doubt whatsoever that OS/2 Warp was the fastest of the three on that platform. But none were really slow.
I don’t think that this can be debated fairly.
For one thing, OS/2’s trump card is that it requires a minimum of 4MB of RAM; NT 4.0 requires 12MB. Another is that OS/2 will absolutely demolish NT 4.0 (even with performance tuning) on PS/2s. In fact, a 66MHz 486 PS/2 will outperform a 133MHz Pentium Dell. That is because OS/2 is optimized for the PS/2 (it uses the 32-bit BIOS).
NT 4.0 certainly has advantages in the areas OS/2 users were screaming to IBM about (see drsmithy above): the long and complex installation and the single input queue. To make the system run more optimally and compatibly, you were forced to install third-party software, usually from Hobbes.
At a telecommunications company I was working at, they had a choice: upgrade to OS/2 Warp 4 or port everything to Windows. Both IBM and Microsoft had been lobbying upper management on which direction to go. IBM offered a one-million-dollar deal to upgrade the entire company to Warp 4, and they had provided staff who fixed the compatibility problems between OS/2 2.1 and Warp 4 in the company’s main in-house applications. Microsoft upped the ante: Bill Gates flew in and spoke to the President of the company directly. As a result, after spending 3 billion dollars on porting to Windows, the company was running at 75% functionality and had to maintain a 24-hour vigil to keep the applications from keeling over.
The decision to move from OS/2 to NT 4.0 was not based on a sound technology report. The company did not benefit from this “technology innovation”.
Companies who are reluctant to change for change’s sake are not stubbornly resisting innovation, they are simply making wise business decisions.
I have to agree with you concerning the “technology du jour”. I especially liked the comment, “When you think about it, does the latest version of Office really do anything better than Office 6.0 did 10 years ago?”
In honesty, going from Office Professional 4.3 and Office 95 to Office 97 was a disaster for the majority of the production environment in the country. Word 97 was a complete fiasco (although I stayed employed for over three years fixing Word 97 issues.) I still think Microsoft hit its pinnacle with Word 2.0 and has been spiraling downhill ever since. Word 2.0 was a good, stable word processor; most of the “enhancements” and “features” tacked on since then are attempts to turn a good word processor into first a bad desktop publishing system and later a bad Web publishing system.
I have been in the computer industry for over 31 (thirty-one) years. I started on mainframes in the seventies. This was while I was in the Air Force. The system under which we worked did require advance notice to compile a program, as well as advance scheduling to run the program. However, the programs back then worked. The compiler changed twice in a 17 (seventeen) year period.
I went to PCs in early 1983. This was when the Zenith Z-248 was the front-runner and IBM was still thinking about getting into the game. Again, compilers and programs worked. We also used Apple, Commodore, Atari, and Sinclair (chiclet keyboards, anyone?) home computers, and Burroughs stand-alone mini-computers.
Of course, the Zenith machines were purchased for use in a five-year contract. So too were the IBM PCs in 1984. This time period (five years) on contracts led to stability in products, and a comprehensive grounding in software design and testing led to programs that “just worked”.
The advent of buggy software being released did not start until the compiler was prevalent on the individual desktop. This affected the military as well as the commercial sector. Quite simply, anyone who could read a book and hack the compiler syntax could produce programs. The programs were not necessarily good ones, not necessarily well-designed ones, but they worked, possibly with some bugs. If a program was slow, the sudden answer was to get a faster machine, not to look for bad or sloppy coding sequences. If a program was slow, the answer was to get more memory, not to look for inefficient algorithms.
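To make that last point concrete, here is a toy Python sketch (the scenario is invented) of the kind of fix that used to be expected before “buy a faster machine” became the answer: replacing a quadratic duplicate check with a linear one.

# Slow version: rescans the list for every element, O(n^2) comparisons.
def duplicates_slow(accounts):
    dupes = []
    for i, acct in enumerate(accounts):
        if acct in accounts[:i] and acct not in dupes:
            dupes.append(acct)
    return dupes

# Fast version: remembers what it has already seen in a set, O(n) overall.
def duplicates_fast(accounts):
    seen, dupes = set(), set()
    for acct in accounts:
        if acct in seen:
            dupes.add(acct)
        seen.add(acct)
    return sorted(dupes)

# Both return the same answer; only the amount of work differs.
print(duplicates_fast(["A1", "B2", "A1", "C3", "B2"]))  # ['A1', 'B2']

Same output, same hardware; the difference is entirely in the algorithm.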
I thoroughly enjoy new technology and I love to play with it. However, it has no place in a production environment until it has been thoroughly tested and debugged. Mistakes cost money; bugs cost money; needless upgrades cost money; and that money can be better spent. Better health care benefits come to mind, along with better (not newer) services to customers.
Enough ranting!! Later,
Mike
I’m not comparing OS/2 to NT on general terms. I can’t say for sure OS/2 was always faster. But on my rig and for things I did, it was always faster than NT. The problem wasn’t memory, as 64MB is quite a bit higher than both OS’s minimum memory requirements. I didn’t use Merlin (OS/2 4.0, I don’t think it was called “Warp 4”) long enough to confidently compare it to NT. However its speed seemed to be in the same ballpark.
I used OS/2 2.x (forgot the x) and Warp on an 8 MB 486 DX2-66 (not a PS/2). I hope that 486 didn’t outperform any Pentiums. They were (especially Warp) very nice operating systems for their time, but they were so slow that I couldn’t completely get rid of DOS. I don’t understand your comment about the 32-bit BIOS, since all OS/2 versions bypassed the BIOS.