One interesting aspect of a computer’s instruction set is its addressing modes: how the computer determines the address for a memory access. The Intel 8086 (1978) used the ModR/M byte, a special byte following the opcode, to select the addressing mode. The ModR/M byte has persisted into the modern x86 architecture, so it’s interesting to look at its roots and original implementation. In this post, I look at the hardware and microcode in the 8086 that implement ModR/M, and at how the 8086’s designers fit multiple addressing modes into the chip’s limited microcode ROM. One technique was a hybrid approach that combined generic microcode with hardware logic that filled in the details for a particular instruction. A second technique was modular microcode, with subroutines for the various parts of the task. This is way above my pay grade, but I know quite a few of you love this kind of writing. Very in-depth.
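For a concrete feel for what the ModR/M byte encodes, here is a minimal Python sketch of the 16-bit decoding (this is my own illustration of the well-known byte layout – two `mod` bits, three `reg` bits, three `r/m` bits – not the 8086’s actual hardware or microcode):

```python
def decode_modrm(byte):
    """Decode a 16-bit-mode ModR/M byte into (mod, reg, effective address)."""
    mod = (byte >> 6) & 0b11   # addressing mode: 00/01/10 memory, 11 register
    reg = (byte >> 3) & 0b111  # register operand (or opcode extension)
    rm  = byte & 0b111         # selects the base/index register combination

    # The eight 16-bit base/index combinations selected by the r/m field.
    base = ["BX+SI", "BX+DI", "BP+SI", "BP+DI", "SI", "DI", "BP", "BX"]

    if mod == 0b11:
        ea = f"register {rm}"            # register-to-register, no memory access
    elif mod == 0b00 and rm == 0b110:
        ea = "[disp16]"                  # special case: direct 16-bit address
    elif mod == 0b00:
        ea = f"[{base[rm]}]"             # no displacement
    elif mod == 0b01:
        ea = f"[{base[rm]}+disp8]"       # sign-extended 8-bit displacement
    else:
        ea = f"[{base[rm]}+disp16]"      # 16-bit displacement
    return mod, reg, ea

# Example: 0x46 = 01 000 110 -> [BP+disp8], a common stack-frame access.
print(decode_modrm(0x46))
```

The `mod = 00, r/m = 110` special case is the quirk the post’s microcode discussion revolves around: that slot is stolen from plain `[BP]` to provide a direct-address mode, which is why `[BP]` must instead be encoded as `[BP+disp8]` with a zero displacement.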
Intel recently announced a big driver update for their Arc GPUs on Windows, because their DirectX 9 performance wasn’t as good as it could have been. It turns out they’re using code from the open source DXVK, which is part of Steam Play Proton. DXVK translates Direct3D 9, Direct3D 10, and Direct3D 11 to Vulkan. It was primarily written for Wine, the Windows compatibility layer that Proton is built from (Proton is what the majority of games on the Steam Deck run through), but it also has a native implementation for Linux, and it can even be used on Windows. So it’s not a big surprise to see this. Heck, even NVIDIA uses DXVK for RTX Remix. Windows gamers benefiting from open source technology built for gaming on Linux. My my, the turntables!
Intel has officially revealed its Intel On Demand program, which will activate select accelerators and features of the company’s upcoming Xeon Scalable Sapphire Rapids processors. The new pay-as-you-go program will allow Intel to reduce the number of SKUs it ships while still capitalizing on the technologies it has to offer. Furthermore, its clients will be able to upgrade their machines without replacing the actual hardware, or offer additional services to their own clients. Intel’s upcoming 4th Generation Xeon Scalable Sapphire Rapids processors are equipped with various special-purpose accelerators and security technologies that not all customers need at all times. To offer such end-users additional flexibility regarding investments, Intel will allow them to buy its CPUs with those capabilities disabled, and turn them on if they are needed at some point. The Software Defined Silicon (SDSi) technology will also allow Intel to sell fewer CPU models and then enable its clients or partners to activate certain features if needed (to use them on-prem or offer them as a service). On the one hand, in a perfect world where people and companies are fair, this seems like a great idea – it allows you to buy one processor (or, in the datacentre case, one batch of processors) and then unlock additional features and capabilities as your needs change. Sadly, the world is not perfect and people and companies are not fair, so this is going to be ripe for abuse. We all know it.
The 8086 microprocessor is one of the most important chips ever created; it started the x86 architecture that still dominates desktop and server computing today. I’ve been reverse-engineering its circuitry by studying its silicon die. One of the most unusual circuits I found is a “bootstrap driver”, a way to boost internal signals to improve performance. This circuit consists of just three NMOS transistors, amplifying an input signal to produce an output signal, but it doesn’t resemble typical NMOS logic circuits and puzzled me for a long time. Eventually, I stumbled across an explanation: the “bootstrap driver” uses the transistor’s capacitance to boost its voltage. It produces control pulses with higher current and higher voltage than otherwise possible, increasing performance. In this blog post, I’ll attempt to explain how the tricky bootstrap driver circuit works. I don’t fully understand all the details, but I do grasp the main point here. This is quite an ingenious design.
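The bootstrapping principle itself can be illustrated with a back-of-the-envelope calculation. This is my own sketch with made-up component values, not figures from the actual 8086: a capacitor precharged toward the supply rail couples a rising signal onto a transistor’s gate, pushing the gate above the supply voltage.

```python
# Hypothetical values chosen only to show the mechanism, not 8086 measurements.
V_DD = 5.0   # NMOS supply voltage (volts)
V_TH = 1.0   # transistor threshold voltage (volts)

# The bootstrap node is precharged through a pass transistor, so it only
# reaches one threshold drop below the supply rail.
v_precharge = V_DD - V_TH                    # 4.0 V

# When the driven node then swings up by a full V_DD, the bootstrap
# capacitor couples that swing onto the gate, attenuated by whatever
# parasitic capacitance loads the gate node.
C_BOOT = 1.0          # bootstrap capacitance (arbitrary units)
C_PARASITIC = 0.25    # stray capacitance on the gate node (same units)

coupling = C_BOOT / (C_BOOT + C_PARASITIC)   # 0.8 coupling ratio
v_boosted = v_precharge + coupling * V_DD    # 4.0 + 0.8 * 5.0 = 8.0 V

print(f"gate boosted to {v_boosted:.1f} V, above the {V_DD:.1f} V supply")
```

With the gate held well above the supply, the pass transistor can drive its output all the way to V_DD without losing a threshold drop, which is exactly the “higher current and higher voltage than otherwise possible” benefit the post describes.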
Intel’s highest-end graphics card lineup is approaching its retail launch, and that means we’re getting more answers to crucial market questions of prices, launch dates, performance, and availability. Today, Intel answered more of those A700-series GPU questions, and they’re paired with claims that every card in the Arc A700 series punches back at Nvidia’s 18-month-old RTX 3060. After announcing a $329 price for its A770 GPU earlier this week, Intel clarified it would launch three A700 series products on October 12: The aforementioned Arc A770 for $329, which sports 8GB of GDDR6 memory; an additional Arc A770 Limited Edition for $349, which jumps up to 16GB of GDDR6 at slightly higher memory bandwidth and otherwise sports identical specs; and the slightly weaker A750 Limited Edition for $289. These are excellent prices, and assuming Intel can deliver enough supply to meet demand, I think I may have found my next GPU. If history is anything to go by, these will have excellent Linux support, but of course, we would be wise to let the enthusiasts iron out the bugs and issues. Six to twelve months after launch, these could be amazing all-rounders for a very good price.
Today, Intel introduces a new processor for the essential product space: Intel Processor. The new offering will replace the Intel Pentium and Intel Celeron branding in the 2023 notebook product stack. Those are some old, long-standing brands Intel just put out to pasture. “Intel Processor” will exist next to the Core i product lines as budget processors, just like Pentium and Celeron do today.
All of that makes Arc a lot more serious than Larrabee, Intel’s last effort to break into the dedicated graphics market. Larrabee was canceled late in its development because of delays and disappointing performance, while Arc GPUs are actual things that you can buy (if only in a limited way, for now). But the challenges of entering the GPU market haven’t changed since the late 2000s. Breaking into a mature market is difficult, and experience with integrated GPUs isn’t always applicable to dedicated GPUs with more complex hardware and their own pool of memory. Regardless of the company’s plans for future architectures, Arc’s launch has been messy. And while the company is making some efforts to own those problems, a combination of performance issues, timing, and financial pressures could threaten Arc’s future. There’s a lot of chatter that Intel might axe Arc completely, before it’s truly out of the gate. I really hope those rumours are wrong or overblown, since the GPU market desperately needs a third serious competitor. I hope Intel takes a breather, and allows the Arc team to be in it for the long haul, so that we as consumers can benefit from more choice in the near future.
In the world of today’s high performance CPUs, major architectural changes don’t happen often. Iterating off a proven base is safer, cheaper, and faster than attempting to massively rework the basics of how a CPU fetches and executes instructions. But more than 20 years ago, things hadn’t settled down yet. Intel made two attempts to replace its solid but aging P6 microarchitecture with something completely different. One was Itanium, which avoided the complexity associated with out-of-order execution and variable length decode to deliver very wide in-order execution. Pentium 4 was the other, and we’ll be taking a look at it in this article. Its microarchitecture, called Netburst, targeted very high clock speeds using a long pipeline. Alongside this key feature, it brought a wide range of innovative architectural features. As we all know, it didn’t quite pan out the way Intel would have liked. But this architecture was an important learning experience for Intel, and was arguably key to the company’s later success. The Pentium 4 era was wild, with insane promises Intel could not fulfill, but at the same time, an era of innovation and progress that would help Intel in later years. Fascinating time.
Intel apologized on Thursday after a letter in which the chip maker said it would avoid products and labor from Xinjiang set off an outcry on Chinese social media, making it the latest American company caught between the world’s two largest economies. The chip maker apologized to its Chinese customers, partners and the public in a Chinese-language statement on Weibo, the popular social media site. The company said that the letter, which had been sent to suppliers, was an effort at expressing its compliance with United States sanctions against Xinjiang, rather than a political stance. Intel following in the footsteps of major US companies supporting genocide – Ford, IBM, Apple, and countless others.
Overall though, there’s no denying that Intel is now in the thick of it, or, if I were to argue, the market leader. The nuances of the hybrid architecture are still nascent, so it will take time to discover where the benefits will come, especially when we get to the laptop variants of Alder Lake. At a retail price of around $650, the Core i9-12900K ends up being competitive between the two Ryzen 9 processors, each with their good points. The only serious downside for Intel though is the cost of switching to DDR5, and users learning Windows 11. That’s not necessarily on Intel, but it’s a few more hoops than we regularly jump through. Competition is amazing.
I was wondering what would be the ultimate upgrade for my 386 motherboard. It has a 386 CPU soldered in, an unpopulated 386 PGA socket, and a socket for either a 387 FPU or a 486 PGA (it might take a Weitek as well – not quite sure), and it might even have a soldered-in 486SX PQFP. Plenty of options… But how about hacking a Pentium in? Nothing about this makes any sense, and yet, it’s just plain awesome.
Well, it’s almost here. It looks like Intel will take the ST crown, although MT is a bit of a different story, and may depend heavily on the software being used, or on whether the difference in performance is worth the price. The use of the hybrid architecture might be an early pain point, and it will be interesting to see if Thread Director remains resilient to the issues. The bump up to Windows 11 is also another potential rock in the stream, and we’re seeing some teething issues from users, although right now users who are looking to early adopt a new CPU are likely more than ready to adopt a new version of Windows at the same time. The discourse on DDR4 vs DDR5 is one I’ve had for almost a year now. Memory vendors seem ready to start seeding kits to retailers; however, the expense over DDR4 is somewhat eye-watering. The general expectation is that DDR5 won’t offer much performance uplift over a good kit of DDR4, or might even be worse. The benefit of DDR5 at this point is more to get on that DDR5 ladder, where the only way to go is up. This will be Intel’s last DDR4 platform on desktop, it seems. Intel is taking a different approach than AMD, and follows more in the footsteps of ARM chips – there are both performance and efficiency cores, and it’s up to Intel’s and others’ software to make proper use of them. It’s great to see what competition can lead to, and both AMD and Apple have lit a fire under this entire industry.
“Intel Seamless Update” is a forthcoming feature for Intel platforms, seemingly first exposed by new Linux kernel patches implementing the functionality… Intel is working on the ability to carry out system firmware updates, such as UEFI updates, at run-time, avoiding a reboot in the process. Pretty cool, but sadly, it’s only for enterprise machines and upcoming Xeon processors.
In today’s Intel Accelerated event, the company is driving a stake into the ground regarding where it wants to be by 2025. CEO Pat Gelsinger earlier this year stated that Intel would be returning to product leadership in 2025, but hadn’t yet explained how this would come about – that is, until today, when Intel disclosed its roadmap for its next five generations of process node technology leading up to 2025. Intel believes it can follow an aggressive strategy to match and pass its foundry rivals, while at the same time developing new packaging offerings and starting a foundry business for external customers. On top of all this, Intel has renamed its process nodes. Counting Intel out because they’re facing some really tough years is not very smart. I obviously have no idea when they’ll be on top again, but this industry has proven to have its ups and downs for the two major players, and I have little doubt the roles will become reversed again over time.
Our results clearly show that Intel’s performance, while substantial, still trails its main competitor, AMD. In a core-for-core comparison, Intel is slightly slower and a lot more inefficient. The smart money would be to get the AMD processor. However, due to high demand and prioritizing commercial and enterprise contracts, the only parts readily available on retail shelves right now are from Intel. Any user looking to buy or build a PC today has to dodge, duck, dip, dive and dodge their way to find one for sale, and also hope that it is not at a vastly inflated price. The less stressful solution would be to buy Intel, and use Intel’s latest platform in Rocket Lake. This is Intel’s 10nm design backported to 14nm. It’s not great, and lags behind AMD substantially, but with the chip shortage, it’s probably the only processor you can get at a halfway reasonable price for the foreseeable future.
Intel CEO Bob Swan is stepping down from the position on February 15th, the company has announced. He will be replaced by VMware CEO Pat Gelsinger. Swan was named Intel’s permanent CEO two years ago in January 2019. He initially took on the role on an interim basis in June 2018 following the resignation of Intel’s previous CEO Brian Krzanich. They need a Lisa Su.
Today may be Halloween, but what Intel is up to is no trick. Almost a year after showing off their alpha silicon, Intel’s first discrete GPU in over two decades has been released and is now shipping in OEM laptops. The first of several planned products using the DG1 GPU, Intel’s initial outing in their new era of discrete graphics is in the laptop space, where today they are launching their Iris Xe MAX graphics solution. Designed to complement Intel’s Xe-LP integrated graphics in their new Tiger Lake CPUs, Xe MAX will be showing up in thin-and-light laptops as an upgraded graphics option, and with a focus on mobile creation. With AMD stepping up to the plate with their latest high-end cards, it’s very welcome to see Intel attacking the lower end of the market. They have a roadmap to move up, though, so we might very well end up with three graphics card makers to choose from – a luxury we haven’t seen in about twenty years.
The big notebook launch for Intel this year is Tiger Lake, its upcoming 10nm platform designed to pair a new graphics architecture with a nice high frequency for the performance that customers in this space require. Over the past few weeks, we’ve covered the microarchitecture as presented by Intel at its latest Intel Architecture Day 2020, as well as the formal launch of the new platform in early September. The missing piece of the puzzle was actually testing it, to see if it can match the very progressive platform currently offered by AMD’s Ryzen Mobile. Today is that review, with one of Intel’s reference design laptops. AnandTech’s deep dive into Intel’s new platform, which is the first chip to use Intel’s much-improved graphics processor.
In August, Intel ran one of its rare Architecture Days where the company went into some detail about its upcoming Tiger Lake processor. This included target markets, core counts, graphics counts, a look into some of the new acceleration features, and a promise of a product launch later in the year. That product launch is now here, and Intel is providing Tiger Lake speeds and feeds, with detail and expected benchmark performance for Intel’s next generation of notebook-class devices. A whole slew of laptops using Tiger Lake processors have also been announced today, as well as something called “Intel Evo”, which is a set of specifications OEMs can adhere to for especially high-end ultrabooks (sadly, Evo is entirely Windows-focused, and zero work has been done for other operating systems, such as Linux).
As part of today’s Intel Architecture Day, Intel is devoting a good bit of its time to talking about the company’s GPU architecture plans. Though not a weak spot for Intel, per se, the company is still best known for its CPU cores, and the amount of marketing attention they’ve put into the graphics side of their business has always been a bit lighter as a result. But, like so many other things at Intel, times are changing – not only is Intel devoting ever more die real estate to GPUs, but over the next two years they are transitioning into a true third player in the PC GPU space, launching their first new discrete GPU in over two decades. As part of Intel’s previously-announced Xe GPU architecture, the company intends to become a top-to-bottom GPU provider. This means offering discrete and integrated GPUs for everything from datacenters and HPC clusters to high-end gaming machines and laptops. This is a massive expansion for a company that for the last decade has only been offering integrated GPUs, and one that has required a lot of engineering to get here. But, at long last, after a couple of years of talking up Xe and laying out their vision, Xe is about to become a reality for Intel’s customers. While we’ll focus on different Xe-related announcements in separate articles – with this one focusing on Xe-LP – let’s quickly recap the state of Intel’s Xe plans, what’s new as of today, and where Xe-LP fits into the bigger picture. AnandTech dives into the first pillar of Intel’s GPU plans – integrated graphics and entry-level dedicated GPUs. The other two pillars – high-end enthusiast use/datacenter, and HPC – will be covered in other AnandTech articles.