Today may be Halloween, but what Intel is up to is no trick. Almost a year after showing off their alpha silicon, Intel’s first discrete GPU in over two decades has been released and is now shipping in OEM laptops. The first of several planned products using the DG1 GPU, Intel’s initial outing in their new era of discrete graphics is in the laptop space, where today they are launching their Iris Xe MAX graphics solution. Designed to complement Intel’s Xe-LP integrated graphics in their new Tiger Lake CPUs, Xe MAX will be showing up in thin-and-light laptops as an upgraded graphics option, and with a focus on mobile creation. With AMD stepping up to the plate with their latest high-end cards, it’s very welcome to see Intel attacking the lower end of the market. They have a roadmap to move up, though, so we might very well end up with three graphics card makers to choose from – a luxury we haven’t seen in about twenty years.
The big notebook launch for Intel this year is Tiger Lake, its upcoming 10nm platform designed to pair a new graphics architecture with the high frequencies that customers in this space require. Over the past few weeks, we’ve covered the microarchitecture as presented by Intel at its latest Intel Architecture Day 2020, as well as the formal launch of the new platform in early September. The missing piece of the puzzle was actually testing it, to see if it can match the very competitive platform currently offered by AMD’s Ryzen Mobile. Today is that review, with one of Intel’s reference design laptops. This is AnandTech’s deep dive into Intel’s new platform, the first chip to use Intel’s much-improved graphics processor.
In August, Intel ran one of its rare Architecture Days, where the company went into some detail about its upcoming Tiger Lake processor. This included target markets, core counts, graphics counts, a look into some of the new acceleration features, and a promise of a product launch later in the year. That product launch is now here: Intel is formally launching Tiger Lake, complete with speeds and feeds, details, and expected benchmark performance for Intel’s next generation of notebook-class devices. A whole slew of laptops using Tiger Lake processors has also been announced today, as well as something called “Intel Evo”, a set of specifications OEMs can adhere to for especially high-end ultrabooks (sadly, Evo is entirely Windows-focused, and zero work has been done for other operating systems, such as Linux).
As part of today’s Intel Architecture Day, Intel is devoting a good bit of its time to talking about the company’s GPU architecture plans. Though GPUs are hardly new territory for Intel per se, the company is still best known for its CPU cores, and the marketing attention it has given the graphics side of its business has always been a bit weaker as a result. But, like so many other things at Intel, times are changing – not only is Intel devoting ever more die real estate to GPUs, but over the next two years it is transitioning into a true third player in the PC GPU space, launching its first new discrete GPU in over two decades. As part of Intel’s previously-announced Xe GPU architecture, the company intends to become a top-to-bottom GPU provider. This means offering discrete and integrated GPUs for everything from datacenters and HPC clusters to high-end gaming machines and laptops. This is a massive expansion for a company that for the last decade has offered only integrated GPUs, and one that has required a lot of engineering to get here. But, at long last, after a couple of years of talking up Xe and laying out its vision, Xe is about to become a reality for Intel’s customers. While we’ll cover different Xe-related announcements in separate articles – with this one focusing on Xe-LP – let’s quickly recap the state of Intel’s Xe plans, what’s new as of today, and where Xe-LP fits into the bigger picture. AnandTech dives into the first pillar of Intel’s GPU plans – integrated graphics and entry-level dedicated GPUs. The other two pillars – high-end enthusiast/datacenter use, and HPC – will be covered in other AnandTech articles.
For a while now, Intel has quietly been working on “mOS”, the “multi-OS”: a modified version of the Linux kernel that in turn runs lightweight kernels for high-performance computing purposes. Intel mOS is seldom talked about – public information on it is incredibly rare – as it’s still largely a research project, but it is showing much potential for delivering better scalability and reliability of HPC workloads. Lightweight kernels of this kind have already been used on supercomputers like ASCI Red, IBM Blue Gene, and others. I had indeed never heard of this project before. Interesting.
Intel’s Chief Engineering Officer Murthy Renduchintala is departing, part of a move in which a key technology unit will be separated into five teams, the chipmaker said on Monday. Intel said it is reorganizing its technology, systems architecture and client group. Its new leaders will report directly to Chief Executive Officer Bob Swan. Ann Kelleher, a 24-year Intel veteran, will lead development of 7-nanometer and 5-nanometer chip process technologies. Last week, the company said the smaller, faster 7-nanometer chipmaking technology was six months behind schedule and that it would have to rely more on outside chipmakers to keep its products competitive. Heads were going to roll eventually after so many years of 10 nm and now 7 nm delays. Intel is in a very rough spot.
Intel announced today in its Q2 2020 earnings release that it has now delayed the rollout of its 7nm CPUs by six months relative to its previously-planned release date, undoubtedly resulting in wide-ranging delays to the company’s roadmaps. Intel’s press release also says that yields for its 7nm process are now twelve months behind the company’s internal targets, meaning the company isn’t currently on track to produce its 7nm process in an economically viable way. The company now says its 7nm CPUs will not debut on the market until late 2022 or early 2023. Intel is in big trouble.
For the past two years, modern CPUs—particularly those made by Intel—have been under siege by an unending series of attacks that make it possible for highly skilled attackers to pluck passwords, encryption keys, and other secrets out of silicon-resident memory. On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtensions (SGX), by far the most sensitive region of the company’s processors. The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same: an attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a high bar, it’s precisely the scenario that SGX is supposed to defend against. Is this ever going to stop?
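Side-channel attacks don’t read secrets directly; they infer them from measurable artifacts. As a deliberately simplified illustration of the general class – not SGAxe or CrossTalk themselves, which exploit microarchitectural state rather than software timing – consider how a non-constant-time comparison leaks information through how long it runs:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time
    # leaks how many leading bytes of the guess are correct -- a
    # classic (toy) timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where a
    # mismatch occurs, removing the timing signal.
    return hmac.compare_digest(a, b)
```

An attacker who can time `naive_compare` against a secret can guess it byte by byte, which is why cryptographic code uses the constant-time variant; hardware side channels like the ones above apply the same inference idea to caches and buffers instead of software loops.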
Over the past 12 months, Intel has slowly started to disclose information about its first hybrid x86 platform, Lakefield. This new processor combines one ‘big’ CPU core with four ‘small’ CPU cores, along with a hefty chunk of graphics, with Intel setting out to deliver a new computing form factor. Highlights for this processor include its small footprint, due to the new 3D stacking ‘Foveros’ technology, as well as its standby SoC power, as low as 2.5 mW, which Intel states is 91% lower than that of previous low-power Intel processors. This is Intel’s latest attempt to take on ARM in very thin laptops, tablets, and other low-power devices.
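As a quick sanity check on Intel’s claim: if 2.5 mW really is a 91% reduction, the implied standby power of those earlier low-power parts works out to roughly 28 mW. A sketch of the arithmetic, assuming “91% lower” means a straight percentage reduction:

```python
def implied_previous_standby(new_mw: float = 2.5, reduction: float = 0.91) -> float:
    """If new = old * (1 - reduction), then old = new / (1 - reduction).

    Defaults use Intel's stated figures: 2.5 mW standby at a 91% reduction.
    """
    return new_mw / (1 - reduction)

# 2.5 / 0.09 comes out to roughly 27.8 mW for the previous generation
```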
One thing that Intel has learned through successive years of reiterating the Skylake microarchitecture on the same process but with more cores is optimization – the ability to squeeze as many drops out of a given manufacturing node and architecture as is physically possible, and still come out with a high-performing product when the main competitor is offering similar performance at a much lower power. Intel has pushed Comet Lake and its 14nm process to new heights, in many cases achieving top results in a lot of our benchmarks, at the expense of power. There’s something to be said for having the best gaming CPU on the market, something Intel seems to have readily achieved here when considering gaming in isolation, though now Intel has to deal with the messaging around power consumption, similar to what AMD had to do in the Vishera days. Intel has been able to eke some good performance out of these processors, but all at the expense of power consumption.
Usually, x86 tutorials don’t spend much time explaining the historical perspective behind design and naming decisions. When learning x86 assembly, you’re usually told something along the lines of: here’s EAX. It’s a register. Use it. So, what exactly do those letters stand for? E–A–X. I’m afraid there’s no short answer! We’ll have to go back to 1972… I love digital archeology.
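For the impatient, the short version of the lineage: AX was the 16-bit “accumulator” register on the 8086, addressable as its high and low bytes AH and AL, and the 80386 widened it to the 32-bit “Extended AX”, EAX – all of them aliases onto the same register. The overlap can be sketched in Python with bit masks (purely illustrative, of course – the CPU does this aliasing in hardware):

```python
def sub_registers(eax: int) -> tuple[int, int, int]:
    """Decompose a 32-bit EAX value into its aliased sub-registers,
    mirroring how x86 names overlapping slices of one register."""
    ax = eax & 0xFFFF        # low 16 bits: the original 8086 AX
    ah = (ax >> 8) & 0xFF    # high byte of AX
    al = ax & 0xFF           # low byte of AX
    return ax, ah, al

ax, ah, al = sub_registers(0xDEADBEEF)
# AX = 0xBEEF, AH = 0xBE, AL = 0xEF
```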
We don’t often talk about power supplies, but Intel’s new ATX12VO spec—that’s an ‘O’ for ‘Oscar,’ not a zero—will start appearing soon in pre-built PCs from OEMs and system integrators, and it represents a major change in PSU design. The ATX12VO spec removes most voltage rails from the power supply, all in a bid to improve efficiency standards on the PC and meet stringent government regulations. But while the spec removes the +3.3-volt and +5-volt rails, the -12-volt rail, and +5-volt standby power from the PSU, those voltages aren’t going away—they’re just moving to the motherboard. That’s the other big change, so keep reading to find out more. Power supplies are definitely one of the more cumbersome parts of a modern PC build, so any changes there can potentially have a big impact. The new Mac Pro has shown how a modern PC can be designed without ugly and annoying cabling, opting instead for various pogo pins and properly aligned connectors. Sure, that would be much harder to accomplish in the open ecosystem of PCs, but for an easier building experience – and thus potential access to a larger segment of the market – players in the PC industry would do well to come together, take a long, hard look at the Mac Pro, and replicate some of its innovations across the wider PC industry.
Today, you can upgrade a desktop PC’s gaming performance just by plugging in a new graphics card. What if you could do the same exact thing with everything else in that computer — slotting in a cartridge with a new CPU, memory, storage, even a new motherboard and set of ports? What if that new “everything else in your computer” card meant you could build an upgradable desktop gaming PC far smaller than you’d ever built or bought before? Last week, I visited Intel’s headquarters in Santa Clara, California, so I could stop imagining and check out the NUC 9 Extreme for myself. The linked article from The Verge is a decent overview, but for more information, I’d suggest watching Gamers Nexus’ Stephen Burke’s (praise be upon Him) video to get an even better idea of what Intel is trying to do here. It’s certainly a very fascinating product, and I’m very happy to finally see a major player trying to do something new to combine small form factors with easy expandability and upgradeability. I still have many questions, though, most importantly about just how open this platform – if it even is a platform to begin with – really is. The bridge board that the processor PCIe card and GPU slot into looks quite basic, and there already seem to be multiple variants of said board from different manufacturers, so I hope AMD could just as easily build a competing module. If not, buying into this platform would tie you to Intel, which, at this point in time, might not be the optimal choice.
Intel has dominated the CPU game for decades, and at CES 2020, the company officially announced its first discrete GPU, codenamed “DG1”, marking a big step forward for Intel’s computing ambitions. There were almost no details provided on the DG1, but Intel did showcase a live demo of Destiny 2 running on the GPU. Rumors from Tom’s Hardware indicate that the DG1 is based on the Xe architecture, the same graphics architecture that will power Intel’s integrated graphics on the upcoming 10nm Tiger Lake chips, which the company also previewed at its CES keynote. The market for discrete GPUs is in desperate need of a shake-up, especially at the higher end. Nvidia has had this market to itself for a long time, and it’s showing in pricing.
Here’s a motherboard Intel very quickly wanted to forget about. It’s the Intel CC820—or Cape Cod—desktop board, a product that was late to market (not unusual) and within a few months, the subject of a recall (quite unusual). As the CC820 designation suggests, the board was built on the ill-fated Intel 820 ‘Camino’ chipset. Fascinating story.
While Intel has talked at length about its mainstream Core microarchitecture, it can be easy to forget that its lower-power Atom designs are still prevalent in many commercial verticals. Last year at Intel’s Architecture Summit, the company unveiled an extended roadmap showing the next three generations of Atom following Goldmont Plus: Tremont, Gracemont, and ‘Future Mont’. Tremont is set to launch this year, coming first in a low-powered hybrid x86 design for notebooks called Lakefield, which is built on Intel’s 10nm process and uses a new die-stacking technology called Foveros. At the Linley Processor Conference today, Intel unveiled more about the microarchitecture behind Tremont. AnandTech takes a look at Intel’s upcoming Atom processors, the processor family mostly reserved for lower-end devices and specific markets such as embedded platforms and even some smartphones. Most of us, however, will remember Atom processors best from the netbook craze, where they enabled small, cheap Windows and Linux laptops to be sold in droves.
Way back at CES 2014, Razer’s CEO introduced a revolutionary concept design for a PC that had one main backplane and users could insert a CPU, GPU, power supply, storage, and anything else in a modular fashion. Fast forward to 2020, and Intel is aiming to make this idea a reality. Today at a fairly low-key event in London, Intel’s Ed Barkhuysen showcased a new product, known simply as an ‘Element’ – a CPU/DRAM/Storage on a dual-slot PCIe card, with Thunderbolt, Ethernet, Wi-Fi, and USB, designed to slot into a backplane with multiple PCIe slots, and paired with GPUs or other accelerators. Behold, Christine is real, and it’s coming soon. Anything to compete with the default ATX design of a PC is welcome, and this looks incredibly interesting.
Overall, the launch of Comet Lake comes at a tricky time for Intel. The company is still trying to right itself from the fumbled development of its 10nm process node. While Intel finally has 10nm production increasingly back on track, the company is not yet in a position to completely shift its production of leading-generation processors to 10nm. As a result, Intel’s low-power processors for this generation are going to be a mix of both 14nm parts based on their venerable Skylake CPU architecture and 10nm Ice Lake parts incorporating Intel’s new Sunny Cove CPU architecture, with the 14nm Comet Lake parts filling in the gaps that Ice Lake alone can’t cover. Another year, another Skylake spec bump. Intel sure is doing great.
Intel has been building up this year to its eventual release of its first widely available consumer 10nm Core processor, codenamed “Ice Lake”. The new SoC has an improved CPU core, a lot more die area dedicated to graphics, and is designed to be found in premium notebooks from major partners by the end of 2019, just in time for Christmas. With the new CPU core, Sunny Cove, Intel is promoting a clock-for-clock 18% performance improvement over the original Skylake design, and its Gen11 graphics is the first 1 teraFLOP single SoC graphics design. Intel spent some time with us to talk about what’s new in Ice Lake, as well as the product’s direction.
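That 1 teraFLOP figure is easy to sanity-check from Gen11’s configuration. Assuming the full 64-EU part, with each EU retiring 8 FP32 FMAs per clock (2 FLOPs each) at a boost clock of around 1.1 GHz – the exact clock varies by SKU, so treat these defaults as illustrative – peak throughput lands just over 1 teraFLOP:

```python
def gen11_peak_gflops(eus: int = 64, fp32_lanes_per_eu: int = 8,
                      flops_per_fma: int = 2, clock_ghz: float = 1.1) -> float:
    """Back-of-the-envelope peak FP32 throughput in GFLOPS.

    Defaults assume the full Gen11 configuration (64 EUs) and a
    ~1.1 GHz boost clock; lower-clocked SKUs will land below this.
    """
    return eus * fp32_lanes_per_eu * flops_per_fma * clock_ghz

# 64 EUs * 8 lanes * 2 FLOPs * 1.1 GHz = 1126.4 GFLOPS, i.e. just over 1 TFLOP
```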
Intel’s Clear Linux Project has been on my radar for months, mainly because of its sheer dominance over traditional Linux distributions — and often Windows — when it comes to performance. From time to time I check in on the latest Phoronix benchmarks and think to myself, “I really need to install that.” Up until recently, though, the installer for Clear Linux was anything but intuitive for the average user. It also looked considerably dated. Version 2.0 gives the installer a complete overhaul. Aside from the fact that it runs GNOME – which is not something I’d want to use – the main issue I have with this project is that it’s from Intel. The processor giant has had many Linux projects in the past, but it often just abandons them or doesn’t really know what to do with them.