Just as IBM was posting “future” processor compiler patches in 2019 for what ended up being early POWER10 enablement, the company is once again repeating the same compiler enablement approach, sending out “PowerPC future” patches for what is likely to be POWER11. The “PowerPC future” patches sent out today are just like before — complete with mentions like “This feature may or may not be present in any specific future PowerPC processor… Again, these are preliminary patches for a potential future machine. Things will likely change in terms of implementation and usage over time.” If this is indeed a sign that POWER11 is on its way, I really hope IBM learned from its mistake with POWER10. POWER9 was completely open, top to bottom, which made it possible for Raptor Computing Systems to build completely open, auditable workstations where every bit of code was open source. POWER10, however, contained closed firmware for the off-chip OMI DRAM bridge and the on-chip PPE I/O processor, which meant that the principled team at Raptor resolutely said no to building POWER10 workstations, even though they wanted to. I firmly believe that if IBM tried even a little, there could be a niche but fairly stable market for POWER-based workstations, by virtue of POWER being pretty much the only fully open ISA (at least, as far as POWER9 goes). Of course, we’re not talking serious competition to x86 or ARM here, but I’ve seen more than enough interest to enable a select few OEMs to build and sell POWER workstations. Let’s hope POWER11 fixes the firmware mess that is POWER10, so that we can look forward to another line of fully open source workstations.
Writing a cycle-accurate emulator for a computer system is more than just understanding all the CPU instruction timings. A computer is a complete system with peripherals, interrupts, IO bus signals, and DMA. All this comes with an array of different timings and quirks. When software like Area 5150 is written that requires perfect cycle timing, it can be a challenge to provide the level of accuracy needed for the software to function. Area 5150 in particular requires precise coordination with the CGA’s CRTC chip and timer interrupts to begin the end credits demo effect at precisely the right time. It would be very handy then if we could somehow peek into the operation of the system while it was running and understand how all these parts interact. As it turns out, we can! This process is typically referred to as ‘bus sniffing’, and there’s a lot of technical information out there on the topic in general. Sniffing can be done on everything from Ethernet networks to vending machines, and you can even bus sniff your car. This article specifically discusses sniffing the IBM PC 5150. A very in-depth and technical article, and one that can easily lead to another weekend project.
Magnetic tape drives have long occupied the role that hard drives have shifted toward since the emergence of SSDs – cost-effective cold storage. Although they’re too slow for most users, recent developments allow magnetic drives to carry hundreds of gigabytes per square inch of tape. This week, IBM’s offerings in the space took another step forward. The company’s new TS1170 drive can store 50TB of uncompressed data per tape cartridge using the new JF media type. Employing 3:1 compression expands the capacity to 150TB. The technology represents a 250 percent increase over the TS1160 drive and JE media, which reached 20TB uncompressed and 60TB compressed. Additionally, the TS1170 manages a native data rate of 400 MB/s, increasing to 900 MB/s when handling compressed data. I’ve toyed with the idea of getting a used tape drive so I can use it to back up data – but mostly just to play with the technology. They’re not that expensive on eBay, but there are quite a few different types and offerings, and it’s difficult to get a grasp on what would be a good option for a tinkerer.
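For the numerically inclined, the quoted figures are easy to sanity-check (all numbers come from the article; the 3:1 ratio is the compression factor IBM assumes for JF media):

```python
# Sanity-checking the TS1170 capacity figures quoted above.
native_tb = 50                       # TS1170 + JF media, uncompressed
compressed_tb = native_tb * 3        # assuming the stated 3:1 compression
print(compressed_tb)                 # 150

prev_native_tb = 20                  # TS1160 + JE media, uncompressed
print(native_tb / prev_native_tb)    # 2.5x, the article's "250 percent"
```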
Ars Technica writes: There are hundreds of billions of lines of COBOL code running on production systems worldwide. That’s not ideal for a language over 60 years old and whose primary architects are mostly retired or dead. IBM, eager to keep those legacy functions on its Z mainframe systems, wants that code rewritten in Java. It tried getting humans to do it a few years back, but now it has another idea. Yes, you guessed it: It’s putting AI on the job. The IBM watsonx Code Assistant, slated to be available in Q4 this year, intends to keep humans in the mix, but with a push from generative AI in analyzing, refactoring, and testing the new object-oriented code. It’s not an all-or-nothing process, either, as IBM claims that watsonx-generated code should be interoperable with COBOL and certain Z mainframe functions. This might be one of those cases where using “AI” actually makes sense and can be a meaningful tool for the relatively few COBOL programmers left trying to modernise COBOL codebases. I’m obviously not well-versed enough in any of this to make any objective statements, but it seems to make sense.
The Blue Lightning CPU is an interesting beast. There is not a whole lot of information about what the processor really is, but it can be pieced together from various scraps of information. Around 1990, IBM needed low-power 32-bit processors with good performance for its portable systems, but no one offered such CPUs yet. IBM licensed the 386SX core from Intel and turned it into the IBM 386SLC processor (SLC reportedly stood for “Super Little Chip”). Later on, IBM updated the processor to support 486 instructions. It is worth noting that SLC variants were still available—nominally a 486, but with a 16-bit bus. The licensing conditions reportedly prevented IBM from selling the SLC processors on the free market. They were only available in IBM-built systems and always(?) as QFP soldered on a board. A unique processor from the days when Intel licensed others to make x86 chips, even allowing them to improve upon them. Those days are long gone, with only AMD and VIA remaining as companies with an x86 license.
Ars Technica has a great article about the IBM mainframe. Mainframe computers are often seen as ancient machines—practically dinosaurs. But mainframes, which are purpose-built to process enormous amounts of data, are still extremely relevant today. If they’re dinosaurs, they’re T-Rexes, and desktops and server computers are puny mammals to be trodden underfoot. It’s estimated that there are 10,000 mainframes in use today. They’re used almost exclusively by the largest companies in the world, including two-thirds of Fortune 500 companies, 45 of the world’s top 50 banks, eight of the top 10 insurers, seven of the top 10 global retailers, and eight of the top 10 telecommunications companies. And most of those mainframes come from IBM. In this explainer, we’ll look at the IBM mainframe computer—what it is, how it works, and why it’s still going strong after over 50 years. Whenever I see anything about mainframes, I think of that one time an 18-year-old decided to buy a mainframe off eBay to run at home, and gave an amazing presentation about the experience.
In case you thought AIX, IBM’s legacy proprietary Unix, had a future: IBM apparently doesn’t. The Register reported Friday that IBM has moved the entire AIX development group to IBM India, apparently its Bangalore office, placing 80 US-based developers into “redeployment.” That’s a fairly craven way of replacing layoffs with musical chairs, requiring the displaced developers to either find a new position within the company (possibly relocating as well) within some unspecified period, or retire. About a third of IBM’s global staff is on the Indian subcontinent. IBM didn’t publicly announce this move, and while it’s undoubtedly good news for IBM India, it seems bad news for AIX’s prospects: IBM tends to spend money on the technologies it thinks are up and coming, so an obvious cost-cutting move suggests IBM doesn’t think AIX is one of those things. The writing’s on the wall for all the remaining commercial UNIX variants. By this point I think most of the work being done on AIX and HP-UX is maintaining the install base and fulfilling support contracts, after which there’s no real reason to keep these platforms going.
Project Monterey was an attempt to unify the fragmented Unix market of the ’90s into a single cross-vendor Unix that would run on Intel Itanium (and others). The main collaborators were IBM, which brought AIX; HP, which was supposed to bring some bits of HP-UX; Sequent, with DYNIX/ptx; and SCO, with UnixWare. The project shared the fate of Itanium – it totally failed. In the end, Linux took its spot as the single Unix. The main legacy of Project Monterey was the famous SCO vs IBM lawsuit. IBM did, however, produce an AIX version for the IA-64 architecture! According to Wikipedia, 32 copies were sold in 2001. Except of course no one kept a copy, and the famous OS was lost forever. Until now! This rare release has been recovered, imaged, and uploaded for posterity. It’s going to be difficult to actually run it, though, as there’s no emulator capable of doing so – you’re going to need a very specific type of Itanium machine, an Intel engineering-sample Itanium workstation, which was available from several vendors.
Say hello to the RISC ThinkPad that’s not a ThinkPad, the IBM WorkPad z50. Let’s say you went to CompUSA, or, I dunno, Fry’s, or Circuit City, in mid-1999. Why, you might pick up an Ethernet hub and a BeOS advanced topics book, and marvel at this lithe little laptop IBM was selling for US$999 ($1780 in today’s dollars) MSRP. It had all the ThinkPad design cues and a surprisingly luxurious 95% keyboard, plus that frisson-inducing bright red mouse stick. And you might say, I want this, and I’m going to take it home. I want one of these so very bad – but like so many things in classic computing, eBay prices have gone batshit insane, making it very, very hard to justify.
Can’t get enough of porting old software? How about getting Doom ported to and running on an old version of AIX for PowerPC? You know what every computer needs? DOOM. Do you know what I couldn’t find? DOOM for the IBM RS/6000, but that’s not surprising. These machines were never meant for gaming, but that doesn’t mean you can’t do it. If you like pain, anyway. In this extra-long NCommander special, we’re going to explore AIX and discuss the RS/6000 43p Model 150 I’m running it on. Throughout this process, I explore the trouble in getting bash to build, getting neofetch to work, and then the battle for high colors, SDL, and more. This video is over an hour long, but incredibly detailed and lovingly obscure.
IBM has announced it has cleared a major hurdle in its effort to make quantum computing useful: it now has a quantum processor, called Eagle, with 127 functional qubits. This makes it the first company to clear the 100-qubit mark, a milestone that’s interesting because the interactions of that many qubits can’t be simulated using today’s classical computing hardware and algorithms. But what may be more significant is that IBM now has a roadmap that would see it producing the first 1,000-qubit processor in two years. And, according to IBM Director of Research Darío Gil, that’s the point where calculations done with quantum hardware will start being useful. I feel like quantum computing is one of those things that will eventually have a big impact on various aspects of our world, but at this point, it’s far too early – and far too complicated – to really make any predictions.
In today’s era of hybrid cloud, there is an increased demand for flexible infrastructure, continuous availability, scalable and sustainable compute, enhanced security and data protection, and increased integration with open technologies. As businesses navigate these dynamic market conditions and IT infrastructure demands, they require an operating system they can rely on that can be optimized to adapt to these changing business needs. With the introduction of IBM AIX 7.3 Standard Edition, IBM addresses these needs while also continuing its tradition of providing new functions that can help dramatically improve system availability, scalability, performance, and flexibility while maintaining binary compatibility to ensure a quick and seamless transition to the new release. Combined with Power10, AIX 7.3 enables clients to modernize with a frictionless hybrid cloud experience to respond faster to business demands, protect data from core to cloud, and streamline insights and automation. AIX 7.3, coupled with IBM POWER8® and later technology-based systems, delivers a computing platform designed for hybrid cloud that is optimized, secure, and adapts to evolving business demands. This means AIX 7.3 has been released – well, sort of, since it won’t actually be available until 10 December.
This is an introduction to getting IBM’s OS/360 operating system loaded and running on the Hercules emulator for the System/370, ESA/390, and z/Architecture systems. It assumes you have some familiarity with the 370, and with OS; in particular, you need to have some understanding of JCL, and of OS/360 (or later versions, like MVS or OS/390) usage and operation. It does not purport to be an introduction to the world of the 370. This is a bit more complicated to set up than just about any other emulator or VM out there. A great weekend project for people with the right skill set and inclination.
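To give a flavour of what the setup involves: Hercules is driven by a plain-text configuration file that declares the emulated CPU and its devices, and the card reader is how you feed JCL into the running system. A minimal sketch for an S/370 setup might look like the following — device numbers, sizes, and file names here are illustrative assumptions, not taken from the article:

```
# hercules.cnf — illustrative sketch for an OS/360-era S/370 guest
ARCHMODE  S/370          # OS/360 requires System/370 architecture mode
MAINSIZE  16             # main storage, in MB
NUMCPU    1

# Devices: device-number  device-type  file
0009  3215               # operator console
000C  3505  jobs.jcl     # card reader — submit JCL decks here
000D  3525  punch.out    # card punch output
000E  1403  printer.out  # line printer output
0150  3330  os360.150    # DASD image holding the system residence volume
```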
While POWER9 was big for open-source fans – with the formation of the OpenPOWER Foundation and Raptor Computing Systems designing POWER9-based systems that are fully open source down to the schematics and motherboard firmware – the same currently can’t be said about POWER10. While IBM has published a lot of the POWER10 firmware as open source, remaining closed, at least for the time being, are the off-chip OMI DRAM bridge and the on-chip PPE I/O processor. This sucks. I am a huge fan of Raptor’s fully open POWER9 workstations and boards, and despite Raptor hinting for months now that there were issues with POWER10’s openness, I was hoping things would be figured out before the release of IBM’s new POWER10 processors this month. Sadly, this seems to have been wishful thinking. Raptor’s POWER9 workstations are the only fully open performance-oriented computers you can get, and until IBM decides otherwise, it’s going to stay that way. That just sucks.
IBM today announced IBM z/OS V2.5, the next-generation operating system for IBM Z, designed to accelerate client adoption of hybrid cloud and AI and drive application modernization projects. I have several IBM Z mainframes running in my garage running our family’s Minecraft server. This update will surely lead to downtime, which is a major, major bummer, especially since IBM is shoving ever more ads into z/OS to get us to subscribe to IBM Music.
The IBM PC spawned the basic architecture that grew into the dominant Wintel platform we know today. Once heavy, cumbersome, and power-thirsty, it’s a machine that you can now emulate on a single board with a cheap commodity microcontroller. That’s thanks to work shared in a how-to on YouTube. The full playlist is quite something to watch, showing off a huge number of old-school PC applications and games running on the platform. There’s QBASIC, FreeDOS, Windows 3.0, and yes, of course, Flight Simulator. The latter game was actually considered somewhat of a de facto standard for PC compatibility in the 1980s, so the fact that the ESP32 can run it suggests the developer has done well. This is excellent work, and while there are tons of better ways to emulate an old IBM PC, they’re not as cool as running it on a cheap microcontroller.
Recently, popular Apple blogger John Gruber has been on a mission to explain why, exactly, tech companies like Apple don’t need stricter government oversight or to be subjected to stricter rules and regulations. He does so by pointing to technology companies that were once dominant, but have since fallen by the wayside a little bit. His most recent example is IBM, once dominant among computer users, but now a very different company, focused on enterprise, servers, and very high-end computing. Gruber’s argument: It wasn’t too long ago — 20, 25 years? — when a leadership story like this at IBM would have been all anyone in tech talked about for weeks to come. They’ve been diminished not because the government broke them up or curbed their behavior through regulations, but simply because they faded away. It is extremely difficult to become dominant in tech, but it’s just as difficult to stay dominant for longer than a short run. Setting aside the fact that having to dig 40 years into the past of the fast-changing technology industry to find an example of a company losing its dominance among general consumers, and trying to apply that to the vastly different tech industry of today, is highly questionable, IBM specifically is an exceptionally terrible example to begin with. I don’t think the average OSNews reader needs a history lesson when it comes to IBM, but for the sake of completeness – IBM developed the IBM Personal Computer in the early ’80s, and it became a massive success. Almost overnight, it became the personal computer, and with IBM opting for a relatively open architecture – especially compared to its competitors at the time – it was inevitable that clones would appear. The first few clones that came onto the market, however, ran into a problem.
While IBM opted for an open architecture to foster other companies making software and add-in cards and peripherals, what they most certainly did not want was other companies making computers that were 100% compatible with the IBM Personal Computer. In order to make a 100% IBM compatible, you’d need to have IBM’s BIOS – and IBM wasn’t intent on licensing it to anyone. And so, the first clones that entered the market simply copied IBM’s BIOS wholesale, or wrote a new BIOS using IBM’s incredibly detailed manual. Both methods were gross violations of IBM’s copyrights, and as such, IBM successfully sued them out of existence. So, if you want to make an IBM Personal Computer compatible computer, but you can’t use IBM’s own BIOS, and you can’t re-implement IBM’s BIOS using IBM’s detailed manual, what are your options? Well, it turns out there was an option, and the company to figure that out was Compaq. Compaq realised they needed to work around IBM’s copyrights, so they set up a “clean room”. Developers who had never seen IBM’s manuals, and who had never seen the BIOS code, studied how software written for the IBM PC worked, and from that, reverse-engineered a very compatible BIOS (about 95%). Since IBM wasn’t going to just hand over control of their platform that easily, they sued Compaq – and managed to find one among the 9000 copyrights IBM owned that Compaq violated (Compaq ended up buying said copyright from IBM). But IBM wasn’t done quite yet. They realised the clone makers were taking away valuable profits from IBM, and after their Compaq lawsuit largely failed to stop clone makers from clean-room reverse-engineering the BIOS, IBM decided to do something incredibly stupid: they developed an entirely new architecture that was incompatible with the IBM PC: MCA, or the Micro Channel Architecture, most famously used in IBM’s PS/2.
In the short run, IBM sold a lot of MCA-based machines due to the company’s large market share and dominance, but customers weren’t exactly happy. Software written for MCA-based machines would not work on IBM PC machines, and vice versa; existing investment in IBM PC software and hardware became useless, and investing in MCA would mean leaving behind a large, established customer base. The real problem for IBM, however, came in the long run. Nine of the most prominent clone manufacturers realised the danger MCA could pose, and banded together to turn the IBM PC into a standard not controlled by IBM, the Extended Industry Standard Architecture (with the existing IBM PC/AT bus retroactively renamed to ISA), later superseded by VESA Local Bus and PCI. Making MCA machines and hardware required paying hefty royalties to IBM, while making EISA/VLB/PCI machines was much cheaper, and didn’t tie you down to a single, large controlling competitor. In the end, we all know what happened – MCA lost out big time, and IBM entirely lost control over the market it helped create. The clone makers, and their successful struggle to break the PC free from IBM’s control, have arguably contributed more to the massive amounts of innovation, rapid expansion of the market, and popularity and affordability of computers than anything else in computing history. If the dice of history had come up differently, and IBM had managed to retain or regain control over the IBM PC platform, we would have missed out on one of the biggest computing explosions prior to the arrival of the modern smartphone. To circle back to the beginning of this article – using IBM’s fall from dominance in the market for consumer computers as proof that the market will take care of the abusive tech monopolists of today at best betrays a deep lack of understanding of history, and at worst is an intentional attempt at misdirection to mislead readers.
Yes, IBM lost out in the marketplace because its competitors managed to produce better, faster, and cheaper machines – but the sole reason this competition could even unfold in the first place is because IBM inadvertently lost the control it had over the market. And this illustrates exactly why the abusive tech giants of today need to be strictly controlled, regulated, and possibly even broken up. IBM could only dream of the kind of control today’s tech giants actually wield.
COBOL for Linux on x86 1.1 is the latest addition to the IBM COBOL compiler family, which includes Enterprise COBOL for z/OS and COBOL for AIX. COBOL for Linux on x86 is a productive and powerful development environment for building and modernizing COBOL applications. It includes an optimizing COBOL compiler and a COBOL runtime library. COBOL for Linux on x86 is based on the same advanced optimization technology as Enterprise COBOL for z/OS. It offers both performance and programming capabilities for developing business critical COBOL applications for Linux on x86 systems. COBOL for Linux on x86 is designed to support clients on their journey to the cloud. It enables clients to strategically deploy business-critical applications written in COBOL to a hybrid cloud environment or best-fit platforms, which includes IBM Z (z/OS), IBM Power Systems (AIX), and x86 (Linux) platforms. As I understand it, there’s still a lot of COBOL code all over the industry, so it makes sense for IBM to make its COBOL technologies available to more people.
By leveraging the strengths of the IBM Z platform’s computing power and resources, IBM z/OS® plays an important role in providing a secure, scalable environment for the underlying transformation process on which organizations are embarking to deliver swift innovation. IBM z/OS V2.5 is designed to enable and drive innovative development to support new hybrid cloud and AI business applications. This is accomplished by enabling next-generation systems operators and developers to have easy access and a simplified experience with IBM z/OS, all while relying on the most optimal usage of computing power and resources of IBM Z servers for scale, security, and business continuity. This is far beyond my comfort level.
How do you boot a computer from punch cards when the computer has no operating system and no ROM? To make things worse, this computer requires special metadata called “word marks” that can’t be represented on a card. In this blog post, I describe the interesting hardware and software techniques used in the vintage IBM 1401 computer to load software from a deck of punch cards. (Among other things, half of each card contains loader code that runs as each card is read.) I go through some IBM 1401 machine code in detail, which illustrates the strangeness of the 1401’s architecture and instruction set compared to a modern machine. I simply cannot imagine what wizardry these newfangled computers must’ve felt like to the people of the ’50s, when computers first started to truly cement themselves in the public consciousness. Even though they’ve been around for twice as long, I find a world without cars far, far easier to imagine and grasp than a world without computers.
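The per-card loader trick the post describes can be sketched in miniature. This is a hypothetical toy model in Python, not 1401 machine code — it ignores word marks and real opcodes entirely — but it shows the core idea: each card carries both a payload and the loader code that knows where to store that payload, so no ROM or operating system is needed:

```python
# Toy model of a self-loading card deck. Column split, addresses, and the
# loader encoding are all invented for illustration.

MEMORY = {}

def read_card(card):
    """Model the hardware: reading a card deposits its 80 columns into a
    fixed read buffer (addresses 1-80 on a real 1401), after which control
    jumps into the loader half of the buffer."""
    assert len(card) == 80
    for addr, ch in enumerate(card, start=1):
        MEMORY[addr] = ch
    run_loader()

def run_loader():
    """Model the loader half of the card (columns 41-80 here): it encodes a
    destination address and copies the payload half (columns 1-40) from the
    read buffer to that address."""
    dest = int("".join(MEMORY[a] for a in range(41, 46)))
    payload = "".join(MEMORY[a] for a in range(1, 41)).rstrip()
    for offset, ch in enumerate(payload):
        MEMORY[dest + offset] = ch

# A two-card "deck": each card is payload (cols 1-40) plus loader (cols 41-80).
deck = [
    "HELLO".ljust(40) + "00500".ljust(40),
    "WORLD".ljust(40) + "00600".ljust(40),
]
for card in deck:
    read_card(card)

print("".join(MEMORY[500 + i] for i in range(5)))  # HELLO
print("".join(MEMORY[600 + i] for i in range(5)))  # WORLD
```

The real machine interleaves this far more cleverly — the loader instructions execute directly out of the card buffer as each card is read — but the bootstrap principle is the same.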