Intel on Thursday notified its partners and customers that it would be discontinuing its Itanium 9700-series (codenamed Kittson) processors, the last Itanium chips on the market. Under its product discontinuance plan, Intel will cease shipments of Itanium CPUs in mid-2021, a bit over two years from now. The impact on hardware vendors should be minimal – at this point HP Enterprise is the only company still buying the chips – but it nonetheless marks the end of an era for Intel, and of its interesting experiment with a non-x86, VLIW-style architecture.
Itanium has a long and troubled history, but it’s always been something I’ve wanted to experiment with and play around with. Maybe the definitive discontinuation of the platform will inject some more stock of machines into eBay.
It’s a shame that Alpha and PA-RISC had to die for Itanium…
No kidding! They ended up taking two well-designed and popular architectures, and replacing them with a flaming pile of crap.
Good riddance to Itanium… the processor architecture that promised it all and delivered nothing.
In a way, Itanium has been a great success – just its announcements led to the discontinuation of most competing CPUs.
(and W8 …the son of Kitt, the talking car? 😛 )
Anyway, I suppose this news might speed up the OpenVMS port to x86-64… ( https://en.wikipedia.org/wiki/OpenVMS “As of 2019, a port to the x86-64 architecture is underway.[6][7]” …maybe it will have a second youth?)
Not bloody likely. That port has been ongoing for basically a decade and they are still nowhere near a release.
Plus, most of the luster of OpenVMS was the HW/SW combination. By the time they release the OS, if they ever do, most of the remaining interest will have dissipated. Maybe some legacy shops will care, but there should be little to no interest in OpenVMS for newer applications.
As far as delivering nothing goes, IA-64 suffered from insufficient software technology. The architecture was fundamentally different from the existing RISC and superscalar CISC architectures. Conceptually there was an enormous amount of potential: the architecture could execute a large number of instructions in parallel, achieving high throughput without increasing clock frequency, and thus heat. It did so, however, by requiring a very complex optimizing code generator in the compiler, one that wasn’t really useful for any other architecture, so the investment in the necessary compiler toolchains never happened and Itanium never sufficiently achieved its potential.
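To make that concrete, here’s a minimal, hedged C sketch (nothing Itanium-specific, and the function is made up) of the sort of independence an EPIC code generator has to prove at compile time, where a conventional out-of-order core simply rediscovers it at run time:

/* A hedged illustration, not Itanium code: an EPIC/VLIW compiler must prove
 * at compile time that these iterations are independent in order to pack
 * their operations into wide instruction bundles; an out-of-order x86/RISC
 * core finds the same parallelism dynamically in hardware.  Without the
 * restrict qualifiers the compiler can't even prove the pointers don't
 * alias, which is exactly the kind of analysis burden described above. */
void blend(float *restrict out, const float *restrict a,
           const float *restrict b, int n)
{
    for (int i = 0; i < n; i++) {
        /* each iteration is independent, so several can be scheduled together */
        out[i] = a[i] * 0.5f + b[i] * 0.5f;
    }
}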
Another problem with Itanium is that a lot of the complexity supposedly saved by dropping the hardware scheduler ended up reappearing as larger structures elsewhere, which negated the original idea.
Also, in order to perform, Itanium relied heavily on predication, which ended up negating the potential power savings as well. In fact, Itaniums ended up being very power hungry AND low frequency.
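For anyone unfamiliar with predication, here’s a rough C-level sketch of the idea (hypothetical functions, not actual IA-64 code): with if-conversion, both sides of a branch are computed and a predicate selects the result, so there’s no branch to mispredict – but the losing side’s work is thrown away, and that wasted work is exactly where the power goes.

#include <stdint.h>

/* Branchy version: only one side executes, but a misprediction stalls the pipeline. */
int32_t step_branch(int32_t x)
{
    if (x & 1)
        return x * 3 + 1;
    return x / 2;
}

/* Predicated (if-converted) version, roughly what an IA-64 compiler leans on:
 * both sides are computed unconditionally and a predicate picks the winner.
 * No branch to mispredict, but the discarded computation still burns power. */
int32_t step_predicated(int32_t x)
{
    int32_t then_val = x * 3 + 1;       /* "then" side, always computed */
    int32_t else_val = x / 2;           /* "else" side, always computed */
    int     pred     = (x & 1);         /* conceptually a predicate register */
    return pred ? then_val : else_val;  /* typically a select/cmov, not a branch */
}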
MIPS (I mean the desktop chips as found in SGI workstations) died because SGI thought Itanium would be so good it would eclipse MIPS anyway. Seriously.
MIPS was dead long before Itanium.
When SGI released its first Itanium systems in 2003, the MIPS IV architecture they were using was already 9 years old.
MIPS isn’t dead – it’s still being developed today, and is used in new devices all the time. It had its peak in the US/Japan with the PS2, but that doesn’t mean it’s dead simply because it’s not in a current console or PC. That was never its main market in the first place.
Considering this is an article about Itanium dying, and the comments are about RISC workstation and server architectures, I think saying MIPS was long dead by the time Itanium came out is appropriate, given the context.
For all intents and purposes MIPS is basically on life support; the main source of revenue is its IP portfolio. Nobody is really doing anything with MIPS now that ARM has taken over that space.
Huh? SGI starved MIPS of R&D on purpose in anticipation of Itanium.
On purpose? Not even close. SGI was simply not large enough to afford the development of high-performance microprocessors.
I don’t think many of you understand the scale of development cost for a modern microprocessor.
And they were right. When Itanium came out, MIPS had horrendous price/performance and was easily surpassed by the IA-64 parts. The latest high-end MIPS parts were just embarrassing; they basically cost almost an order of magnitude more than an Opteron while offering half the performance. I think SGI’s main mistake was not going AMD64, rather than wasting time and effort on IA-64. Heck, even the HyperTransport fabric would have been easier to integrate with SGI’s scale-out ccNUMA routers than IA-64.
I think that’s only partly true. Sun was faced with the same decision, and went with AMD64. It only bought them a few more years than SGI got.
The real problem was that x86 and AMD64 became very cheap and very powerful. Anything still surviving from the old days (including Itanium and UltraSparc, for at least a tiny bit longer) does so on the back of long term support contracts and different revenue streams. Similarly, ARM is now beginning to take certain server markets away from Intel as the performance becomes adequate and the price (including per-watt) is competitive or better.
Some parts of Alpha lived on in AMD’s x86 CPUs… the EV6 bus, and later EV7’s interconnect as a predecessor to HyperTransport, were influenced by the Alpha engineers. A lot of Alpha technology made its way into AMD’s CPUs and helped them battle Intel in the 2000s.
Well, at least some of its good ideas survived. I remember, as an undergrad, visiting a lab at my university where the guys were demonstrating to us how superior its performance was compared to Intel’s offerings.
It was a sad day when I read about its demise.
I wish 64-bit x86 had been Alpha. Back in the day it could emulate x86 code faster than a native x86 chip.
Alpha and PA-RISC were both going to die anyway, but for different reasons. Their success from a business perspective was largely dependent on the vertical integration and vendor lock-in of the commercial UNIX world. HP originally developed the Itanium architecture as a successor to PA-RISC and then jointly developed it with Intel. It made sense for HP to get out of the chip fab business, which they weren’t very good at (even though the PA-RISC design was good, the production yields were terrible), and partner with a CPU design and manufacturing powerhouse like Intel. So Itanium was always intended to be the successor to PA-RISC. Alpha faced a similar manufacturing issue: the dies were so large that production yields were typically under 25%, making them incredibly costly to produce. So when HP bought Compaq (which had acquired DEC), it made good business sense to focus their UNIX efforts on a single platform. Thus Alpha was dead because there was no longer a business case to keep it around, and PA-RISC was dead because Itanium had been designed from the beginning to replace it.
There really isn’t much to experiment with Itanium, unless you’re writing Assembly yourself. If not, it’s just a big box capable of running a very limited selection of operating systems, most of them obsolete by now. I’m all for esoteric hardware, but this is one of those cases where I think no one will miss it.
I have an rx8640 that I got off eBay for about 250 euros, fully populated: 8 sockets, 16 cores, 128 GB of RAM.
It runs Gentoo. Works just fine. I just can’t run it very often, because it needs just shy of 4 kW. With the price of power in my country, it costs me about 28 to 30 euros to run it for 24 hours.
Maybe run it in winter and turn off the heating to compensate. All of that 4 kW (minus the minuscule energy applied to entropy reduction) comes out as heat.
Thom Holwerda,
Same here. The only thing that kept me from getting one was the cost (and it was a pretty substantial obstacle). Had it been priced more in line with other consumer chips, I would have had one. Obviously Intel chose to target the enterprise market exclusively, but that might have been a bad move. Who knows, if there had been enough developers and users like us, it might have made a difference in the market. As it was, though, it just wasn’t even reasonably priced.
The funny thing is that while Itanium is the subject of many jokes, I’d say it was actually extremely early for its time – a GPGPU of sorts before that was a thing. Nobody had software to run on it, and it essentially became a very expensive and slow x86 emulator for large corporations, but if it had been (much) more accessible to the public, I genuinely think it would have been one of those platforms with more potential than people realized. What it desperately needed was a demo scene to push the software innovation and really show what VLIW was capable of. But in the corporate world that just never happened; software innovators never took the helm, and it became a marketing failure.
AMD won with its 64-bit x86 processor, but mostly by looking backwards and emphasizing strong compatibility with legacy software/toolchains. What’s the moral of the story? Maybe that it’s very hard to overthrow an established monopoly head on, even when you helped create that monopoly. Instead you need to find the gaps and fill them, like Nvidia and ARM have done.
Perhaps people should also consider the SuperH architecture, that despite being quite old, was quite clever : https://en.wikipedia.org/wiki/SuperH#J_Core -> http://j-core.org/
Kochise,
Perhaps we should; it looks like it could be interesting, and I’d love to see more CPU competition. Alas, I find I have less time to fiddle with things these days. Unless I’m being paid in some way or other, I find it hard to justify learning something else just for the sake of it. Right now I’m placing my bets on CUDA/OpenCL, not necessarily because it’s the best technology, but because it’s a de facto standard of sorts.
But apart from your beefy PC with a beefy graphics card, perhaps also get some CUDA/OpenCL-capable SBC (hm, I’ve been getting ads here fairly often for a Toradex system-on-module with an Nvidia ARM SoC, advertising its CUDA capability), to get accustomed to the lower levels of performance more likely available in most robotics or IoT scenarios. 😛
zima,
I haven’t found any SBC solutions that fit my requirements yet, however they are getting better by the year and so I’m hoping one day I’ll find something that checks all the boxes.
1. 2 GB of RAM (ideally 4 GB) – pretty easy to find nowadays.
2. 4-8 cores – pretty easy to find, however many of these don’t have adequate cooling and end up severely throttled. I have yet to find a production-ready SBC kit that takes this problem seriously without resorting to custom mods.
3. It needs enough USB host adapters to hook up two or three web cams – the catch is that most SBC computers only have one host adapter with a multiport USB hub attached, which means the ports all share the same bandwidth and are unsuitable. USB3 might be feasible with an external USB3 hub, but this is less common and less desirable than having built in ports.
4. Wifi – this can be a USB dongle, but if so then the SBC needs to have an additional USB host adapter for it.
5. OpenCL/GPGPU (hardware accelerated) – this would be very nice, but I’ve given up on it with the ARM SBCs I’ve purchased so far. Too often the only way to get GPU acceleration is with proprietary Android kernels. 🙁
6. Mainline kernel – it’s so useful to be able to build your own kernel. I take this for granted with x86, but with ARM I get very frustrated being stuck with an unsupported proprietary kernel and not being able to replace it. This is a huge (albeit artificial) advantage for x86. Many ARM manufacturers don’t care enough to fix this.
7. GPIO – the OS needs to support the input/output pins (ideally at 5V with PWM/etc), but if there’s a serial port then a microcontroller can be used for GPIO instead.
I will keep looking with interest at the ARM SBC market, but I feel the practical issues are holding them back. I can see myself purchasing more this year if they offer improvement over what I’ve already got, so if you have any recommendations I’d like to hear them.
@Alfman: I provided you with some links to powerful ARM boards in another thread, but they got filtered out behind an “Access denied” page
Kochise,
I had the exact same issue the other day, and I assume WordPress is flagging our posts as spam. If you log out, it won’t even let you log back in. I even tried a different IP, but it turned out to be an account lock rather than an IP lock. I was about to email OSNews, but the access denied page had an “ask us to unlock” form, which ended up working, so I got back in.
@post by Alfman 2019-02-03 11:18 pm
Granted, there are issues with currently available SBCs …but that’s the reality we have to live with, those issues _are_ encountered in production scenarios, too.
The only thing that worries me about J-Core is that it seems to have gone quiet. According to their talks and their timeline they should be well into replicating the SH-4 (when the architecture gets really interesting), but alas it appears not.
Small projects often go quiet when there’s nothing to talk about as they don’t have a dedicated branch whose only job is to make news whether there’s anything new or not. I’d guess they’re busy bug-fixing the first implementation for SH4, and until it’s working, we won’t hear a peep. Same way we didn’t hear a peep until they had SH2 working.
The fact they don’t post updates on their website isn’t that much of a worry, but the fact their mailing list has died as well is more worrisome.
Yes but they have missed projected milestones without a peep, and as @andre has pointed out the mailing list is dead. There hasn’t been a post there in a year. The IRC channel (at least the last time I checked) was likewise a ghost town.
The moral of the story is that power requirements took over once data centers and mobile became mainstream. And it turns out that Itanium had horrible power consumption, mainly due to things inherent to the architecture, like predication. So IA-64 was a complete no-go in mobile and wasn’t very attractive for large-scale data center deployment either. Its market was reduced to some very specialized high-availability servers, which is a ridiculously small slice of the market and just not big enough to justify its further development.
IA64 was basically a dead end.
Regarding Itanium in the Netherlands, note the date of the article:
https://www.spoorpro.nl/spoorbouw/2015/04/02/prorail-vernieuwt-ict-op-verkeersleidingsposten/
I will miss Itanium. I really liked it. It had some neat concepts when it was developed.
Most of the really good stuff found its way into Xeon so it isn’t lost. Look at those AVX instructions with predicates. And the high reliability RAS features developed for Itanium went into Xeon a while ago. And there’s the FMA (Fused Multiply Add) that was first implemented by Intel for Itanium.
I also think Intel could have developed a version of Itanium immune to all of the Spectre bugs. Remove hyperthreading and build a chip with LOTS of smaller, simplified cores. Since Itanium doesn’t require speculative execution, it wouldn’t need all of the hacky fixes x86 has been requiring. I bet you could fit one or two extra Itanium cores into the silicon each x86 core spends on its instruction decoder, register renaming, and speculative execution.
Zan Lynx,
+1
That’s an insightful point. I wasn’t 100% sure about this so I looked it up:
https://secure64.com/not-vulnerable-intel-itanium-secure64-sourcet/
That’s a big advantage for VLIW architectures. Scaling up with explicit parallelism rather than speculative parallelism simplifies CPU design and carries less risk of accidental data leaks. Now that it’s better understood how dangerous speculation is, it makes an even bigger case for VLIW architectures (not that this is going to budge the market in the least).
What about Larrabee? Epiphany-III? Parallax Propeller?
Kochise,
This?
https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
The next one looks like it could be interesting and could prove useful.
http://www.adapteva.com/products/e16g301/
https://www.parallella.org/buy/
Between the two official suppliers: they’re out of stock on Amazon, and Digi-Key has them marked up quite a bit over Parallella’s retail price. I also worry about some of the reviews…
The Parallax Propeller seems to be more of a microcontroller solution; I’ll stick with the ATmega and Arduinos given that I’ve already invested my time & skills into that ecosystem 🙂
There are so many SBCs these days that I’m thinking of making a website to keep track of all of them because it’s so hard to know everything out there.
Larrabee was essentially the proof-of-concept for the entire Xeon Phi line. It was still a superscalar architecture – essentially a basic Pentium core with the addition of big (512-bit) SIMD instructions and 4-way SMT. Knights Corner, the first “real product” from it, was still that same basic architecture. Its successor, Knights Landing – the last of the product line (Knights Mill was a thing, but just barely, and nobody used it) – was still the same basic core design, just based on a newer Atom core instead of the older Pentium; still x86 superscalar with wide SIMD and 4-way SMT. They did a lot with the on-chip fabric and memory architecture, though.
>> Look at those AVX instructions with predicates.
AVX-512 has predicates and inherited it from Larrabee.
AVX works like any other SIMD extension – SSE, VMX, Neon.
>>And there’s the FMA (Fused Multiply Add) that was first implemented by Intel for Itanium.
Intel was late to a party where everyone else already had FMA.
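For the curious, per-lane masking plus FMA on the x86 side looks roughly like this with AVX-512 intrinsics (a hedged sketch, not production code; assumes an AVX-512F capable CPU and something like gcc/clang with -mavx512f):

#include <immintrin.h>

/* Sketch: for 16 floats at a time, compute y = a*x + y only in the lanes
 * where x > 0, leaving the other lanes of y untouched.  This combines the
 * two features discussed above: per-lane mask (predicate) registers and
 * fused multiply-add.  Caller must supply at least 16 elements per array. */
void masked_axpy16(float *y, const float *x, float a)
{
    __m512    vx  = _mm512_loadu_ps(x);
    __m512    vy  = _mm512_loadu_ps(y);
    __mmask16 k   = _mm512_cmp_ps_mask(vx, _mm512_setzero_ps(), _CMP_GT_OQ);
    __m512    fma = _mm512_fmadd_ps(_mm512_set1_ps(a), vx, vy); /* a*x + y in every lane */
    __m512    res = _mm512_mask_mov_ps(vy, k, fma);             /* keep old y where mask is 0 */
    _mm512_storeu_ps(y, res);
}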
>>Since Itanium doesn’t require speculative execution it wouldn’t need all of these hack fixes x86 has been requiring
Itanium can do speculative loads. But every deeply pipelined thing is speculative anyway. Even Cortex-A8.
viton,
I’m not deeply familiar with the Itanium architecture specifically, but I suspect whether or not “speculative loads” carry a security risk depends on whether they’re based on hidden dynamic hardware state (like they are on x86). For example, the CPU could statically analyze the code stream ahead and do a speculative load in anticipation of its use; as long as the CPU doesn’t reveal instruction branching history through side channels, it shouldn’t be vulnerable to Spectre.
I agree. The Spectre flaw has been a disaster for modern CPU design. I’m convinced that if this risk had been better understood two decades ago, the outcome could have been more favorable for CPUs with explicit parallelism over the deeply pipelined, speculative, sequential processors that we got.
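To illustrate why the hidden dynamic state is the dangerous part, here’s the well-known Spectre v1 (bounds-check bypass) gadget shape in plain C. On a speculating out-of-order core, a mispredicted branch can cause both loads to execute anyway, leaving a secret-dependent line in the cache that a timing side channel can recover; a design that only issues the loads the compiler explicitly scheduled wouldn’t leak this way, at least in principle:

#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];
volatile uint8_t sink;

/* Classic Spectre v1 shape: if the bounds check is mispredicted as "in
 * bounds", the out-of-bounds load and the dependent load below may run
 * speculatively.  The architectural results are discarded, but the cache
 * footprint of array2[secret * 512] survives and encodes the secret byte. */
void victim(size_t untrusted_index)
{
    if (untrusted_index < array1_size) {
        uint8_t secret = array1[untrusted_index];
        sink = array2[secret * 512];
    }
}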
Speculative loads -> Spectre-like scenario
https://blogs.msdn.microsoft.com/oldnewthing/20150804-00/?p=91181
Advanced loads
https://blogs.msdn.microsoft.com/oldnewthing/20150805-00/?p=91171
” I bet you could fit one or two extra Itanium cores into each x86 core’s instruction decoder, register rename and speculative execution silicon.”
The overhead of out-of-order execution vs. EPIC is quite comparable, since the Itanium had a gigantic register file itself, for example.