ComputerWorld has an article about OpenVMS which maintains a substantial user base in large organizations, and there’s promise of new interest as it moves to 64-bit Itanium.
HP's strategy is nuts: cancelling all these Itanium products, moving all these OSes around and cancelling some of them. HP should have kept Compaq separate from HP and operated it as an owned company, cutting costs between the two divisions. They would have far more market share and power if they had stayed that way.
I bet HP now makes desktops in those crappy Compaq factories.
It's now partially ported to Itanium (no?) and otherwise on a dead chip (the Alpha). The Itanium isn't selling well; I believe AMD64 is selling something like 200 times more chips.
I never liked the command line on VMS (DCL). It's very clunky and counter-intuitive, although it was meant to be easier than *nix. My attraction to OpenVMS is mainly the security aspects.
Seems like this is OpenVMS's monthly reminder on OSNews. Nothing new seems to be happening: the story on Albert Einstein Medical Center is old, and the port to Itanium is an ongoing project. The usual 99.999% uptime at some companies and the over-distance clustering technology usually get mentioned too.
Yeah, the OS is rock solid, but the TCO on ALPHA and VAX hardware is high. Have you ever wondered why this company, if the tech was so great, was bought by Compaq? And why many critics relate Sun’s recent bad times to DEC’s history? Too bad they aren’t truly “Open”VMS. At least Sun is turning the corner.
Maybe next month we'll get another "new" story or a reminder of how OpenVMS is still around, referencing Albert Einstein Medical Center, or Sony, or some pharmaceutical company (again).
> my attraction to OpenVMS is mainly the security aspects.
If you want superior security, go with Trusted Solaris, which is way above and beyond anything in VMS land. Solaris can do *everything* better than OpenVMS. It looks like HP is already beating a dead horse with OpenVMS even though the horse was pretty damn good back in the days.
“If you want superior security, go with Trusted Solaris, which is way above and beyond anything in VMS land. Solaris can do *everything* better than OpenVMS. It looks like HP is already beating a dead horse with OpenVMS even though the horse was pretty damn good back in the days.”
Of course we should take your word for it, right… no need to provide any sort of justification, or analysis. If you say so it must be so…
I would say MVS or zOS.
How about 640 CPUs for their zSeries mainframe in a Parallel Sysplex?
Beat that
“the itanium isn’t selling well. i believe AMD64 is selling like 200 times more chips.”
Itanium and Opteron target different markets. Itanium competes with POWER and sells quite well. Opteron competes with Xeon.
> I would say MVS or zOS.
> How about 640 CPUs for their zSeries mainframe in a Parallel Sysplex?
> Beat that
Err, IBM mainframe scalability is not all that impressive; actually it is quite unimpressive compared with the best of Unix. "Single box" scalability of a mainframe was still at an unimpressive 24 CPUs the last time I checked. On the clustering front, mainframes are still behind the 1024 CPUs that AIX and HP-UX can support, or the almost 900 Solaris can. The maximum amount of memory zOS can address is still a measly 64GB. So, yeah, mainframes have quite a bit of catching up to do with Unix. Unix is always at the top of the heap.
> Itanium competes with POWER and sells quite well
LOL. Itanium is being sold in such miserable numbers that I don’t think there is *any* hope of it ever catching up with SPARC or Power. Sun alone outsells all Itanium shipments out there in just a couple of weeks. I think Itanium is pretty dead now especially after HP scrubbed the entire Itanium workstation product line — it is a pretty good indicator which way things are heading now.
zOS is 64-bit and can address much more than 64GB of RAM.
Actually:
The D32 model has a maximum of 256GB per CPU.
It isn't UNIX; it works differently and is operated differently.
Throughput is amazing over there. Just check it out.
http://www-1.ibm.com/servers/eserver/zseries/z990/features.html
You can run multiple LPARs on it, something UNIX systems have only started to get somewhat close to (Sun hasn't yet, not really).
The amount of hardware supported operations is enormous.
All crypto is in there, Unicode support, you name it.
In short, don't compare. It isn't worth it. The hardware isn't there yet. Neither is the software. Not yet.
“LOL. Itanium is being sold in such miserable numbers that I don’t think there is *any* hope of it ever catching up with SPARC or Power. Sun alone outsells all Itanium shipments out”
Relatively few POWER machines are sold *period*, and most SPARC chips sold are not the high-end UltraSPARC IV chips comparable with Itanium.
“there in just a couple of weeks. I think Itanium is pretty”
Sure about that?
“dead now especially after HP scrubbed the entire Itanium workstation product line — it is a pretty good indicator which way things are heading now.”
Half of SGI’s server sales are Itanium boxes. The fastest
supercomputer in the world at the moment is an Altix – using Itanium chips.
> Half of SGI’s server sales are Itanium boxes. The fastest
> supercomputer in the world at the moment is an Altix – using Itanium chips.
Yes, SGI is the only company out there making some kind of living off Itanium, but only because they are targeting a highly specialized HPC market. There is little hope that SGI's systems will ever enter the realm of business computing. Plus, SGI shipments are so low nowadays that it makes my eyes water sometimes; you can count all their quarterly shipments on one hand. I'm a long-standing fan of Silicon Graphics, but I surely wish they had never jumped into bed with Intel and had kept on developing MIPS instead.
Err, IBM mainframe scalability is not all that impressive; actually it is quite unimpressive compared with the best of Unix. "Single box" scalability of a mainframe was still at an unimpressive 24 CPUs the last time I checked. On the clustering front, mainframes are still behind the 1024 CPUs that AIX and HP-UX can support, or the almost 900 Solaris can. The maximum amount of memory zOS can address is still a measly 64GB. So, yeah, mainframes have quite a bit of catching up to do with Unix. Unix is always at the top of the heap.
Ha ha, this is very entertaining! A few years ago, people thought Linux couldn’t scale up past 4 CPUs and would never match Solaris without becoming a heavy, unmaintainable tangle of locks.
Now Linux is poised on the brink of running on 2048 CPUs in a single system (actually it would have already happened at NASA, but we just won’t hear about it for a few months). Far more than Solaris or AIX have ever achieved, and even much greater than they even support in *clustered* operation.
Not to mention double what the old champ (IRIX) used to achieve.
Funny how times change.
Oh, not to mention it is still very approachable in terms of development, and it is still the fastest general purpose OS at basic single threaded operations like syscalls, context switches, page faults, page allocations, fork/exec/exit, network interface packet overhead, etc.
And it runs on by far the most diverse hardware architectures and has the largest out of the box driver support too.
> Ha ha, this is very entertaining! A few years ago, people thought Linux couldn't scale up past 4 CPUs and would never match Solaris without becoming a heavy, unmaintainable tangle of locks.
Linux still can't scale well past 4 CPUs; nothing has changed. The tasks SGI is using Linux for are quite loosely parallel, so they don't have much relevance to single-box scalability. For that matter, it doesn't really make much of a difference which OS you use for highly parallel tasks; SGI chose Linux because it runs on Itanium (better FP performance) and it is cheap. Solaris, AIX, HP-UX or IRIX could have taken its place anytime, given the target and the funding.
In business computing, running Linux on a box with more than 4 CPUs is still a waste because of the lack of multithreading in the kernel; Solaris, AIX and HP-UX can make much better use of the available CPUs. For memory-intensive applications that prefer low memory latency (databases, ERP, etc.) and require scalability across many CPUs, there are still only Solaris, AIX and HP-UX. Linux *might* catch up in a few years' time, provided there is enough interest from ISVs to do that, but that will require a big-time overhaul of the Linux kernel and is unlikely to happen.
”
Now Linux is poised on the brink of running on 2048 CPUs in a single system (actually it would have already happened at NASA, but we just won’t hear about it for a few months). Far more than Solaris or AIX have ever achieved, and even much greater than they even support in *clustered* operation.
”
Actually the machine is a cluster of 20 supers, each with a single image for every 512 processors. So the champ, IRIX still remains so since it is the only system out there that has a working 1024 processor single image installed.
In any case, the Linux used in those machines has very little to do with our everyday Linux, since most of the "magic" is carried out by the hardware itself through the NUMAflex chipset, and the glue is a straight port of the cellular IRIX technology.
Also, I really do not like it when people make straw-man arguments. Most people were not saying that Linux could not scale well beyond 4 CPUs… people at that time were stating the fact that the Linux SMP implementation sucked big donkey balls at that time, no more and no less. The threading model and scalability do still leave a lot to be desired, to tell you the truth, and a lot of Linux's success is based on hack upon hack… which gets the job done… but whether or not it is an optimal solution is more than debatable.
Chances are that very little of the NUMA stuff that SGI has implemented for those machines will trickle down to the general kernel, because, to tell you the truth, it is not needed.
Actually the machine is a cluster of 20 supers, each with a single image for every 512 processors. So the champ, IRIX still remains so since it is the only system out there that has a working 1024 processor single image installed.
I know. The last 4 machines installed are their new BX2s, with an increased cache-coherency domain and Itanium 2s with 9MB cache. They have been hooked together into a single system and booted. Obviously they haven't made the outcome public yet.
In any case, the linux used in those machines has very little to do with our everyday linux
No, Linux is very important.
since most of the “magic” is carried out by the hardware itself through the NUMAFlex chipset,
Err, this “magic” you speak of is the interconnect, routers, core logic etc. Yes this is the “magical” hardware side of things – next you need scalable software to drive it.
and the glue is a straight port of the cellular IRIX technology.
The "glue" is just platform drivers; no core IRIX code has gone into Linux.
Look, these levels are way past what your average operating system has to handle. They’re way out of Sun’s league. A bottleneck in the timer interrupt will livelock the entire system and cause it to not boot, for example.
Also, I really do not like it when people make straw-man arguments. Most people were not saying that Linux could not scale well beyond 4 CPUs… people at that time were stating the fact that the Linux SMP implementation sucked big donkey balls at that time, no more and no less. The threading model and scalability do still leave a lot to be desired, to tell you the truth, and a lot of Linux's success is based on hack upon hack… which gets the job done… but whether or not it is an optimal solution is more than debatable.
Also, I really do not like it when people get so uptight about Linux kicking arse.
Chances are that very little of the NUMA stuff that SGI has implemented for those machines will trickle down to the general kernel, because, to tell you the truth, it is not needed.
To tell the truth, you really don't have much idea though, do you? Their 512-CPU systems can boot a standard 2.6 kernel just fine. They have very little "NUMA stuff"; the main things are CPU and memory placement tools, accounting and performance tools, and their XSCSI layer (not needed in 2.6 now).
What’s more it can all be downloaded here: http://oss.sgi.com/projects/sgi_propack/
But there is nothing amazing there that somehow makes Linux scale (in the 2.4 kernel they distribute, sure, they've got things like the O(1) scheduler backported from 2.6).
I'll give you some references for it:
http://www.sgi.com/company_info/newsroom/press_releases/2004/july/s…
http://www.computerworld.com.au/index.php/id;814048434;relcomp;1
http://www.cio-today.com/story.xhtml?story_id=26082
Exit, IRIX.
I wonder what is going to happen to SGI if Intel cuts off the life support on Itanium. With the rather pathetic Itanic volumes at present, the future doesn't look all that rosy: on the low end there is no way Itanic can break any new ground, as Opteron is already eating Itanic's lunch and Intel's own EM64T will cannibalise sales even more; on the high end there is a steep uphill battle against SPARC and POWER, which have very loyal customer bases. With all this I could easily see Itanic shriveling even further and finally being killed off by Intel. Where does this leave SGI? The first Intel flirtation ended with SGI losing all of its workstation market share (I don't think there is any "Graphics" left in Silicon Graphics any more); is this flirtation going to end SGI as a company? I would certainly hate to see that.
If Intel gives up IA64 for something else then I’m sure that wouldn’t stop SGI moving to another chip. Their really cool NUMAflex stuff isn’t IA64 specific in the slightest (it came from MIPS systems).
However, you neglect the fact that the Itanium 2 was the undisputed king of HPC performance for years, until the POWER5. I think it gets nearly double the flops that a high-end Opteron can on Linpack, and its big caches go a long way in helping the interconnect scale. They've got systems out there now, I think, with at least 8 times more memory than the Opteron can even physically address…
No, it may not be making big inroads into the datacentre, but it definitely is into the HPC and supercomputer centres.
The other thing that’s been forgotten and is quietly festering away is the Alpha team and their IP at work on the IA64 line. It will be very interesting to see what happens with the high end processor market in the next couple of years. I don’t think Itanium is out of the race yet.
You have to remember as well that both Xeon and Itanium are moving towards a common socket and chipset family, so by 2006-2007 it will be possible to plug either an Itanium or a Xeon into your Intel-based workstation/server. This will help cut the overall cost of Itanium, as it will no longer depend on custom (read: expensive) chipsets.
Currently Itanium systems make up 80% of SGI sales; the remaining 20% are MIPS.
…is it truly ready for the Desktop?
RE: ddd (IP: —.dl.dl.cox.net) – Posted on 2004-11-02 00:52:46
HP desktops, servers and workstations were crap even before the merger; wall-to-wall proprietary crap, enough to bring a grown man to tears. And as for their support, my sister has a better grip on IT issues, and her main forte is art!
Want a good desktop, workstation or server? Grab one from Dell, IBM or Sun. Let HP die a quick and painless death. God only knows why a person would want to purchase HP products; their printers are an ink-jet-clogging nightmare, their handhelds are laughable and their so-called "important UNIX" is getting as much attention as the red-headed stepchild of the family.
Cripes, it's sad when someone says this, but I have greater respect for Microsoft than I do for HP.
RE: JCS (IP: 68.159.145.—) – Posted on 2004-11-02 05:27:04
Regarding the UltraSPARC IV, Sun seems to take their time merging new processors into their product lineup. The really big zinger will be the UltraSPARC VI+.
With that being said, the latest Itanium to be released will be priced around $530 in quantities of 1,000, meaning it is getting into the price range of the Opteron. What is going to hold it back is the lack of third-party motherboards and the availability of processors, by which I mean the ability for a small OEM to purchase Itanium processors from their distributor, which isn't possible right now.
Oh, and regarding SGI, nothing would please me more than SGI teaming up with Sun and porting Solaris across to Itanium. That would be one VERY mean machine.
>Linux still can’t scale well past 4 CPU’s — nothing changed.
>The tasks SGI is using Linux for are quite loosely parallel
>so it doesn’t have much relevance to single box scalability.
…
>In business computing running Linux on box with more than 4
>CPU’s is still a waste because of the lack of multithreading
>in the kernel
Please shut up. Stop spreading FUD. Come with proof.
I refer you to SGI Altix and its customers, and the lkml mailing list, where 16-way boxes are often tested.
Please shut up. Stop spreading FUD. Come with proof.
I refer you to SGI Altix and its customers, and the lkml mailing list, where 16-way boxes are often tested.
The SGI machines use a very heavily customised kernel. The fact is the Linux kernel still only scales OK to around 16 CPUs, and there is NOTHING WRONG with that, considering that the bulk of server sales sit at around 1-4-way machines. 8-way machines are beginning to get popular, but there are still reservations held by certain companies about the idea of running their servers on x86 equipment.
With that being said, however, Solaris 10 has made massive strides in performance at the low end of town; couple that with the release of an 8-way Opteron system, and dual core around the corner, and it will be interesting to see where Itanium fits in a few years' time.
http://marc.theaimsgroup.com/?l=linux-kernel&m=109933257631941&w=2
The fact is you're wrong. SGI's systems do use modified kernels, but they don't need to do much scalability- or performance-wise; it is more like management, performance monitoring, partitioning, etc.
You don’t have a shred of evidence to back up your claim that Linux only scales OK to around 16 CPUs, do you?
“You don’t have a shred of evidence to back up your claim that Linux only scales OK to around 16 CPUs, do you? ”
Without NUMAflex, Linux on the Altix could not scale to the 512p single-image system… not the other way around. The hardware does in fact do a lot, and there has been quite a lot of cellular IRIX put into the system (whether you actually know what that implies is left to the imagination). And that comes straight from the horse's mouth; I know plenty of people at SGI and I have in fact seen the machine at Ames.
A lot of the engineers have very few good things to say about being force-fed the move from the IRIX threading model to the Linux one.
In any case, I just don’t understand why you are getting your panties all tangled up about these issues… chill out man.
Without NUMAflex, Linux on the Altix could not scale to the 512p single-image system… not the other way around. The hardware does in fact do a lot, and there has been quite a lot of cellular IRIX put into the system (whether you actually know what that implies is left to the imagination). And that comes straight from the horse's mouth; I know plenty of people at SGI and I have in fact seen the machine at Ames.
Err, OK. No operating system can scale to anything without hardware. But even with theoretically "perfectly scalable" hardware, at the level of 512 CPUs the software is hugely important. If you don't understand the issues involved then there is no point continuing.
A lot of the engineers have very few good things to say about being force-fed the move from the IRIX threading model to the Linux one.
Wow, was that supposed to be evidence?
In any case, I just don’t understand why you are getting your panties all tangled up about these issues… chill out man.
I'm not. I've been stating plain, simple facts the whole time. It has been everyone else who has been trying to shoot me down by spouting FUD, making unsubstantiated claims, etc.
No my panties aren’t in a twist at all. I’m quite happy that Linux scales to 512 (soon 2048) CPUs in a single system, making it the most scalable general purpose operating system around.
And you just have to laugh at those idiots who claim Linux doesn't scale to more than 4 CPUs. Duh, let's see, which evidence will I believe: "512-CPU systems in the world's fastest supercomputer at NASA", or "some retard making desperate claims without any evidence to back them up"? Hmm, that's a tough one.
A lot of the engineers have very few good things to say about being force-fed the move from the IRIX threading model to the Linux one.
Oh and by the way, this is probably in relation to the 2.4 kernels which SGI are still using in production. Threading in 2.4 is admittedly not very good.
2.6 has a completely overhauled, POSIX compliant threading system.
http://marc.theaimsgroup.com/?l=linux-kernel&m=109933257631941&…
The fact is you're wrong. SGI's systems do use modified kernels, but they don't need to do much scalability- or performance-wise; it is more like management, performance monitoring, partitioning, etc.
You don’t have a shred of evidence to back up your claim that Linux only scales OK to around 16 CPUs, do you?
1) Don't get all emotional; this is an operating system, not politics, the meaning of life or some other emotional endeavour. Do us all a favour: keep your crucifix, bible, and Book According to Linus at the door before partaking in a debate.
2) Linux only scales well to 16 CPUs, based on the evidence shown by the OSDL, Linus' own words and those of a few other main kernel developers. I've yet to hear *ONE* of them tout that Linux is so finely grained that it can scale cleanly to 512 CPUs.
The simple fact is, Linux, like it or not, is still very clunky in the scalability department. You can either accept that like a man and move on, or you can post rude, inconsistent posts trying to claim the moral high ground on the basis that you can yell the loudest.
3) If Linux on these machines were so popular and so scalable, please explain why they're not overtaking Sun in the number of SunFires being sold. SunFires still outsell these Itanium machines; things like that simply just don't happen.
Some people here clearly do not know or care to educate themselves about VMS’s capabilities…
Currently 96 nodes per cluster x 64 processors per box (later 96) = 6000+ processors per cluster
Manageable as a single system image if desired. Hard or soft partitionable.
Clusterable over a distance of 800 kilometers out of the box. Greater distances are possible (say New York – Los Angeles, New York – Paris, or Sydney – Perth) if carefully prepared for.
Tandem NSK can actually lose transactions over their long distance clusters (machines in different locations can have inconsistent data), VMS does not.
“If you want superior security, go with Trusted Solaris, which is way above and beyond anything in VMS land. Solaris can do *everything* better than OpenVMS. It looks like HP is already beating a dead horse with OpenVMS even though the horse was pretty damn good back in the days”
Solaris secure? How can any OS that's a member of the patch-of-the-week club be considered secure? I can count on my Solaris systems for buffer-overflow exploits and system crashes. No thanks.
“i never liked the command line on VMS (DCL). it’s very clunky and counter-intuitive, although it was made to be easier than nix. ”
What do you find so counter-intuitive about English verbs? If you want to print you print, if you want to edit you edit, …
And of course you have alternatives. At one time or another csh, the Bourne shell, ksh, perl, tcl, … all were, and most still are, available. But they don't get used so much because DCL is so much easier.
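To make that concrete, here is a rough sketch of a few everyday DCL commands next to their approximate Unix equivalents (the file names are made up, purely for illustration):

$ DIRECTORY /SINCE=YESTERDAY *.TXT    ! roughly: ls -l, filtered by date
$ COPY REPORT.TXT REPORT_OLD.TXT      ! roughly: cp report.txt report_old.txt
$ PRINT /COPIES=2 REPORT.TXT          ! roughly: lpr -#2 report.txt
$ SHOW SYSTEM                         ! roughly: ps aux
$ HELP COPY                           ! roughly: man cp

Every verb is an ordinary English word, and HELP works the same way for all of them.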
“…is it truly ready for the Desktop?”
It’s on my desktop, has been for over a decade and will continue to be.
I never worry about email virii, word processor virii, data loss, …
“TCO on ALPHA and VAX hardware is high”
The TCO on the system, hardware and software together, is low. Yes, it's probably getting expensive to maintain old VAX hardware, but why maintain something that slow? We have systems that must stay on VAXen and we just let them keep running. Our hardware maintenance cost on Alphas is tiny. You can replace old Alphas with new Alphas or new Itaniums. You can replace most VAXen with Alpha, Itanium, or Charon-VAX running on new hardware.
We have almost no system admin cost on any of our VMS systems, having just cut it from one full-time to one part-time position because the full-timer wasn't kept busy by the several dozen systems in use (no reduction in the number of VMS systems). Meanwhile we have several busy admins for about the same number of UNIX/Linux systems and several more for only about twice as many Windows systems.
What these other systems cost us in admin manpower is far more than the hardware maintenance cost on the old VAXen we do keep on maintenance.
*ROFLMAO* Very entertaining, all these uneducated comments about OpenVMS.
Solaris more secure? Kevin Mitnick might disagree. See the record of his testimony before Congress.
True, VMS’s custodians deserve to be boiled in oil, tarred, feathered, beheaded, drawn and quartered, hung and the remains tortured in perpetuity, but that just means they were seduced by the dark side. If they would just take basic business and marketing courses – even the 001 non-transfer level – at their local community college, they would begin to develop an understanding of all the missed opportunities and lost business they’ve let fall by the wayside for reasons that only their analysts could ever begin to understand.
There’s a reason why there are over 64000 known viruses, worms, trojans, etc. for Micro$lop, thousands for UN*X and almost none for OpenVMS. As one Usenet poster’s .sig indicates, 99% of hackers, crackers and script kiddies prefer other operating systems. DEFcon-9 resulted in VMS being “invited not to return” to the annual hacker’s convention.
In these days of almost weekly announcements of new variants of viruses, worms and trojans, OpenVMS could be the greatest security boon in the history of modern computing. All it needs is for its custodians to eschew their denial and unfounded prejudices, and embrace the ubiquitous processing platform that BG, Inc. has used to hand the world over to the hackers, crackers and script kiddies.
VMS's security model is unmatched except in those systems that none of us should know about. VMS's scalability (from the single-CPU desktop to a 96-node, 150-mile-radius cluster of 32-CPU GS1280s) is unparalleled outside of the mainframe world. VMS reliability is the stuff of legend: uptime is measured with a calendar, not a stop-watch. Multi-year, even decade-long uptimes are not unheard of.
Another poster in this forum commented about the DCL command line. Interesting that in UN*Xland "-a" changes meaning depending on which program you're using, while qualifiers like /OUTPUT, /EXCLUDE, /SINCE and uncounted others mean the same to every program: consistent, clear and understandable. And that's not to mention command names like COPY, RENAME, BACKUP, APPEND, EDIT, MOUNT, DISMOUNT, etc. Don't even get me started on that "intuitive" bit. For my money, GUIs are clumsy, poorly thought out, poorly implemented, man-time and man/machine resource intensive and *ANY*thing BUT intuitive!
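A small illustrative sketch (the file names here are invented) of the same qualifiers keeping the same meaning across unrelated commands:

$ DIRECTORY /SINCE=YESTERDAY /OUTPUT=RECENT.LIS   ! list recent files into a file
$ DELETE /SINCE=YESTERDAY /LOG *.TMP;*            ! delete recent temp files, logging each one
$ PURGE /BEFORE=TODAY /LOG                        ! drop old file versions, same /LOG behaviour
$ SHOW SYSTEM /OUTPUT=PROCESSES.LIS               ! the same /OUTPUT works here too

/SINCE always means "only things newer than this" and /OUTPUT always means "write the result to that file", whichever verb you hand them to.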
If you'd like to educate yourself about VMS before posting something that will embarrass you later, see http://www.djesys.com/vms/support/ or use Google Groups to lurk on the comp.os.vms Usenet newsgroup.
“If you want superior security, go with Trusted Solaris, which is way above and beyond anything in VMS land. Solaris can do *everything* better than OpenVMS. It looks like HP is already beating a dead horse with OpenVMS even though the horse was pretty damn good back in the days.”
Of course we should take your word for it, right… no need to provide any sort of justification, or analysis. If you say so it must be so…
The evidence is on Usenet in the Solaris newsgroup comp.security.announce
When considering operating systems, one has to look at the requirements. There are many applications where rock-solid security and operation are essential. Mission critical is when the company stops making money if the computer stops; perhaps the company is damaged because of the lack of service. Do you want the doctor at your bedside in hospital to have to wait for a computer to be booted or fixed before he can see your records? Trusted systems will always have a place. We have one VMS customer who had an uptime (continuous service) of over eight (8) years before they shut the system down for maintenance checks. I believe in OS diversity. OpenVMS is rock solid and the port includes many other products to make a fully complete system for customers with very high expectations.
Hi all.
I’m afraid we are mixing debates.
I have no doubt Solaris, Linux, Unix,… perform very well.
However, it doesn’t detract from VMS. VMS is a very robust and powerful operating system. Those who were forced to abandon VMS have unforgettable memories of their VMS times.
On top of that, VMS deserves to be declared the only true clustering system in the world, with a paramount tool (the Distributed Lock Manager) that allows communication among processes residing on different nodes of the cluster (no need to set up bizarre, ad-hoc communication mechanisms).
Calm down, guys!
The funny thing about an operating system that just keeps running is that nobody notices it. All the attention goes to the OS that is always crashing, has people working round the clock installing patches and generally plays the squeaky wheel. I have customers with applications written in the 1970s and 1980s running on MicroVAXes that just keep going. They reboot them only when there is a power failure (which is automatic anyway), and the worst problem we've had is that the little battery that keeps the time-of-day clock and reboot defaults stops charging after 15 years and must be replaced from Radio Shack for $20.00.
I get quite depressed thinking of all the sites that spent hundreds of thousands re-writing applications for Unix or Windows, hired additional support staff and now have such an emotional and financial investment in their new platforms that they daren’t admit that they made a mistake. VMS is the original bunny that keeps going and as such it falls off management’s radar by being too damn reliable and too damn cheap to run.
I agree that the DEC, Compaq and HP sales staff need to go back to school and learn how to sell what they have, but more than that they need to tell the world just how long some of their customers have been running their VMS platforms, and why.
It all comes down to the numbers, and the accountants still don't get it. I support several OpenVMS shops and the numbers are consistent: 80-90% of their business runs on OpenVMS, but 80-90% of their budget goes to supporting NT and Unix. Consistent in every shop. It is a good thing we left accountants in charge of our careers; go figure.
OpenVMS clearly knocks the socks off any other O/S with its ability to cluster large numbers of computers and its versatility. Ever try to make major modifications to a large Unix system? It takes ten times longer than with OpenVMS. Unix systems simply are not worth the hassle and effort.
VMS has security designed in, not patched on. VMS has several patented security mechanisms.
VMS is not dead; it is being actively developed, with lots of new features in each new version.
See http://www.hp.com/products1/evolution/alpha_retaintrust/openvms/v82…
You can do all that eBiz stuff on VMS – ask the Swiss Stock Exchange.
VMS has the lowest TCO on midrange systems – see the whitepapers
http://h71000.www7.hp.com/openvms/whitepapers/TCS_2004.pdf
http://h71000.www7.hp.com/openvms/whitepapers/sm_whitepaper.pdf
http://h71000.www7.hp.com/openvms/whitepapers/tco_clusters/TCO_WP_F…
VMS runs on systems with up to 32 CPUs now and will support more in the future. More importantly, it scales well with each additional CPU (recent versions are much better at this than previously).
Interesting comments from people who obviously know nothing, or little, about OpenVMS.
OpenVMS was awarded "cool and unhackable" at DEFCON 9, and was not invited back again as no one could hack into it. Hmm, that was not the case for Trusted Solaris.
The command line is so easy. At least it doesn't have stupid names like awk, so called because those are the initials of the people who wrote it. It has easy-to-guess commands in an English-like language.
You have real clustering with a real shareable file system.
The latest record TPS ran on OpenVMS.
OpenVMS is truly open. It conforms to more standards than most flavours of UNIX. Indeed, the reason why Solaris, HP-UX, and AIX are not called UNIX is that they do not conform to enough standards.
99.999% up time and huge clusters may be old hat, but then again it is difficult to improve on that. Hmm, maybe if you run an OS that only has 95% uptime you can improve it.
It has been proven that OpenVMS has the lowest TCO, by an independent company. I think it was Gartner.
Sun is being likened to DEC not because of technology but because of its approach to the market. Digital had excellent technology; it just didn't know how to market it. That is why Sun is being likened to DEC.
Lastly, Alpha is not dead. There is to be one more revision. And those AMD processors that you think are so wonderful? They licensed the I/O bus from… Alpha!
Just a few corrections to mull over
Siobhan Ellis
OpenVMS Product Manager
“i never liked the command line on VMS (DCL). it’s very clunky and counter-intuitive, although it was made to be easier than nix.”
What, you think the eunuchs command line *is* intuitive? As far as I know, there are no VMS commands named after somebody's dog.
Yes indeed, OpenVMS has been ported to Intel Itanium. At present you can get Itanium test systems with OpenVMS 8.1 for Itanium for a very moderate price. In a few months' time the production version, OpenVMS 8.2, will be available for all OpenVMS platforms (VAX, Alpha, Itanium).
The conversion from Alpha to Itanium went very smoothly, since the present VMS versions were designed to be portable after the port from VAX to Alpha.
Not only is it rock solid, it is also very secure and virtually impossible to hack if you use the standard security of the OS wisely.
And then there are the unsurpassed VMS clusters. No other cluster technology comes close. Guaranteed to 96 nodes (but at least 255 nodes are possible), guaranteed to span 500 miles (but tested over thousands of miles), and they work. Several wide-area VMS clusters survived 9/11; the remote users never noticed anything.
And the TCO is quite good; you need very little staff for VMS. After all, hardware is dead cheap these days.
The remark that DCL is clunky and counter-intuitive is very funny. Most DCL commands are in plain English, but can be abbreviated to 3 or 4 characters per command or switch.
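For example (a tiny sketch, the file name is invented), these pairs are equivalent:

$ DIRECTORY /SIZE=ALL /DATE MYFILE.TXT
$ DIR /SIZ=ALL /DAT MYFILE.TXT

$ SHOW DEFAULT
$ SH DEF

DCL only needs enough of a command or qualifier name to be unambiguous, which in practice is usually the first three or four characters.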
Oh yes, don't forget: VMS is much younger than Unix, and so security etc. was built in from the start, and not added later with the help of sticky tape.
Wow, it's amazing there are so many misconceptions out there.
Let's set the record straight in terms of scalability:
1. OpenVMS officially supports 96 servers of up to 32 64-bit processors each, in clusters that can physically span dual datacenters up to 800km apart. That is a total of 3,072 processors in a single cluster. Shortly, 64 CPUs will be supported in a single server, which will raise the total supported CPUs in a cluster to 6,144.
2. For recent OpenVMS customer testimonials, check out:
http://h71000.www7.hp.com/success-stories.html
3. For info on a recent (March 2004) USD $750M OpenVMS HP services win with one customer, check out:
http://www.hp.com/hpinfo/newsroom/press/2004/040324a.html
4. For a recent analyst report on the state of UNIX clustering (where OpenVMS is called the "gold standard"), check out (the URL will wrap):
http://www.tru64unix.compaq.com/unix/illuminata_dt_unix_research_no…
5. For recent info on OpenVMS and Itanium server status, check out:
http://www.hp.com/products1/evolution/alpha_retaintrust/openvms/
6. And for those who think Linux is much more secure than Windows: for those who use RH, how many are aware of the following security patches that needed to be installed (11 security patches released in April alone this year, before they stopped posting to this site):
https://rhn.redhat.com/errata/rh9-errata-security.html
Looking forward to replies.
Regards
Oh, I am sorry… I think you misspelled your post's title; I believe you meant "marketing check".
At Georgian College, we are quite satisfied running our production systems on Novell, MS Windows, RedHat, Solaris, SCO UNIX, OS X and OpenVMS. The O/S's I most often work with are UNIX, Windows and OpenVMS. My home network includes 4 different flavours of UNIX. No OpenVMS, yet.
The complaint I have with OpenVMS is that it is limited in the number of applications developed for it when compared to MS Windows or UNIX. Open Source software is being ported over to OpenVMS, but that is a slow process.
In my opinion, when there is an application or service that is worth running in production, and it is available for OpenVMS, my choice would be to host it on OpenVMS.
I have worked with hardware and O/S's at a technical level full time since 1988. Over the years, I have enjoyed and continue to enjoy working on various O/S's. I have also received very good technical support from many of the manufacturers we deal with. There are some support duds out there too. But I am of the opinion that OpenVMS is the *BEST* O/S I have ever worked with. I am also of the opinion that the most consistent and best technical support I have received over the years has come from DEC and Compaq. In what limited dealings I have had with HP, I have not experienced anything to reduce this rating.
Cheers.
Randy Baker
Systems Administrator
Georgian College of Applied Arts & Technology
Canada
Hi people,
Whoever has used VMS has no doubt: VMS is the best OS.
It's secure (no virus or worm can attack it).
It's simple to use (no strange -abc qualifiers).
It's robust (no need to reboot continuously).
…and now, on the Itanium platform, it becomes cheaper and graphical.
Antonio Vigliott
vms user & software developer
In case you're tired of all the old stories about OpenVMS:
OpenVMS – an obsolete operating system still used by hundreds of obsolete companies, serving billions of obsolete customers and making huge obsolete profits for their obsolete shareholders. And with a newer version every year, still considered obsolete; supports doing e-business securely on the Internet.
Here is a new story for ya…
http://h71000.www7.hp.com/openvms/brochures/wien/index.html
Yes, scalability is an interesting issue.
To set the record straight (having researched the official numbers for a whitepaper):
32 processors per node (current maximum)
96 nodes per OpenVMS cluster (official maximum, can and has been exceeded)
Thus, an OpenVMS cluster, which for most purposes can be viewed as a single system, can include 2,896 processors.
This is not a good forum for a full discussion of the differences between OpenVMS clusters and Unix, Microsoft, and Linux clusters, but suffice it to say that OpenVMS clusters are fully supported at distances of 500 miles. With small adjustments, user sites have run clusters across even wider distances. OpenVMS clusters allowed several firms with data centers within the World Trade Center complex to continue operations without pause on September 11, 2001.
OpenVMS is rock solid and implements a very tight security model. When a team took OpenVMS to DefCon 9 the other year, OpenVMS won an award for being "cool and unhackable". My understanding is that it was then banned from the contest as an unfair advantage.
OpenVMS offers the scalability of a mainframe and the protection which allows it to be configured to support literally hundreds of different user groups securely on a single system (single node or cluster), and at the same time it can process an impressive array of realtime workloads without skipping a beat.
In all, OpenVMS is a robust, reliable, highly secure system and is an excellent platform for serious development.
– Bob
Calculation error on my scratch pad. It should have read 3,072 processors, not 2,896.
– Bob
The HP availability partnership provides service to meet a 99.99% uptime guarantee for the systems that support the Dow corporate data warehouse.
HP systems
Dow relies on a network of over 2,000 HP OpenVMS systems to support mission-critical process control and process automation at its plants worldwide.
A dual-site cluster of HP OpenVMS-based AlphaServer systems provides the platform for the company's Oracle-based data warehouse, which supports 7,000 users with real-time queries and reporting on Dow's supply chain, finance and marketing performance.
I have very good examples of companies moving from VMS to NT or Unix and, after all that, moving back to VMS. Why? VMS is still the most stable OS ever, with real clustering and full disaster tolerance if you like. Prove that something like clustering really works on Solaris or Tandem or Intel without a real risk of data loss, instead of sending in stuff anonymously and just yelling something that doesn't make any sense at all. And yes, OpenVMS is still alive and kicking and will continue to be for a long, long time.
Rich