It’s undeniably good for the Arm Windows app ecosystem to have a viable, decently specced PC that is usable as an everyday computer. The Dev Kit 2023 is priced to move, so there may be some developers who buy one just for the hell of it, which might have some positive trickle-down effects for the rest of the ecosystem.
Because eventually, the Windows-on-Arm project will need to develop some tangible benefit for the people who choose to use it. What you’re getting with an Arm Windows device right now is essentially the worst of both x86 and Arm—compatibility problems without lower power use and heat to offset them and so-so performance to boot. Apple has cracked all three of these things; Windows and Qualcomm are struggling to do any of them.
I’m just not entirely sure who Windows on ARM is supposed to be for. I want it to succeed – the more choice the better, and x86 needs an ass-kicking – but I don’t think the current crop of Windows on ARM devices are even remotely worth it. Either Qualcomm finally gets its act together and comes up with an SoC to rival Apple’s M series, or Microsoft takes matters into its own hands.
Either way, they’re going to need to do something about the performance of x86 code on Windows on ARM.
Who is it for? Well, if it’s Arm, then Jon Masters. He likely has one already. I didn’t realize he’d jumped ship to Google; that’s very, very interesting. He’ll likely just lead the charge on their internal Arm migration, but man, if he could focus some of his energy on the Tensor cores, that would be sweet.
It’s especially hilarious when you consider how much faster Windows 11 ARM running virtualized on Apple Silicon can be. A base model M1 Mac running Windows ARM via Parallels, translating x86 legacy software, runs circles around official Microsoft ARM hardware running the ARM version of the same software, let alone translating x86.
I really do hope there’s some serious collaboration between Microsoft and Apple on the Windows on ARM front
Linux and BSD enthusiasts who like porting their OS of choice to new hardware.
Good luck to both of those things.
Ampere does have chips that would be performant enough for daily use, and ADLINK does produce workstation-level PCs, which people can buy right now. MS could release a competitive offering if they wanted to ask $5k for the kit.
https://www.ipi.wiki/pages/com-hpc-altra
Flatland_Spider,
It used to be that one would have to find an unpatched exploit to override Microsoft’s Secure Boot requirements on ARM.
https://jwa4.gitbook.io/windows/tools/surface-rt-and-surface-2-jailbreak-usb
Is Microsoft still trying to block owners from installing alternative operating systems on ARM? It really discourages me from considering commodity hardware like this, even though the hardware might otherwise be suitable and affordable.
The hardware in your link looks really cool and I want one, but I can’t afford one at that price.
I agree to all of that. XD
Apparently someone tried to boot Linux on it. The article says it probably needs a device tree blob to be released.
https://www.onmsft.com/dev/linux-on-microsoft-dev-kit-2023
https://blog.alexellis.io/linux-on-microsoft-dev-kit-2023/
OpenBSD might already be working.
http://ix.io/4eex
It doesn’t look like any Secure Boot shenanigans, just normal Arm hardware initialization problems.
Annoying. Those should be a thing of the past; why didn’t they make it System Ready?
https://www.arm.com/architecture/system-architectures/systemready-certification-program
Anyone have any clue?
That would have been smart. I’m not sure what the problem is, maybe Qualcomm?
The Ampere stuff is Server Ready certified, and I believe the RPi4 is System Ready certed when set to UEFI boot. Those are completely different ends of the spectrum, so this could have been certed.
You can actually set up an RPi4 to be System Ready and then install Windows ARM on it. And Windows has embraced System Ready Ampere systems in Azure, but for personal computers they don’t want it? I don’t get it. Maybe it’s another org chart thing.
https://www.businessinsider.com/big-tech-org-charts-2011-6
Bill,
I’m not sure MS had much to do with Ampere supporting Server Ready so much as it was RH anointing UEFI as the way to boot Arm servers; they weren’t going to mess around with device trees in the server space.
I’m not sure either. The ThinkPad X13s, the other consumer hardware with the same Qualcomm 8cx Gen 3 chip, also uses device tree to boot, but Lenovo released the blob. The problem may be related to Qualcomm.
Those are my hypotheses anyway. I don’t have access to any details, so it’s all speculation.
Upon reading your links I learned that Microsoft doesn’t allow custom kernels to be booted in WSL VMs, and furthermore the kernels provided by Microsoft disable KVM. Nested virtualization works fine for me on Linux, but I’ll give MS the benefit of the doubt that maybe there’s a technical reason Windows is unable to handle nested virtualization (I don’t know). But not allowing users to use their own Linux kernel builds in WSL? Come on, Microsoft, you are kicking FOSS to the dirt with WSL restrictions like that. And clearly MS developers knew about this, because instead of fixing it they added a Win32 error for it: WSL_E_CUSTOM_KERNEL_NOT_SUPPORTED.
https://github.com/microsoft/WSL/issues/8821
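For context, on x86_64 the supported way to boot a custom kernel under WSL2 is the documented `kernel` setting in `%UserProfile%\.wslconfig`; per the linked issue, this is the path that fails with WSL_E_CUSTOM_KERNEL_NOT_SUPPORTED on Arm devices. A minimal sketch (the kernel path is hypothetical):

```ini
# %UserProfile%\.wslconfig -- global WSL2 settings
[wsl2]
# Absolute Windows path to a custom-built kernel image.
# Honored on x86_64 WSL2; on Arm devices this is reportedly
# rejected with WSL_E_CUSTOM_KERNEL_NOT_SUPPORTED.
kernel=C:\\Users\\me\\kernels\\custom-kernel-image
```

After editing, `wsl --shutdown` restarts the WSL2 VM with the new settings.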
From your link…
I agree with this sentiment. The hardware really doesn’t seem bad, but if they can’t or won’t get these issues sorted, it’s unfit for Linux developers. And given how popular Linux is among developers, it’s hard to imagine this was a simple oversight by Microsoft; it must have come up, and somebody at Microsoft must have made the conscious decision not to do anything about it.
Nested KVM or custom kernels aren’t anything I would have thought to do with WSL. Of course, I also haven’t touched Windows in years.
Maybe it’s a hardware problem. Specifically, maybe it’s a Qualcomm problem.
It probably depends on how much involvement MS had.
You’ve been around. Consumer ODMs rarely think about operating systems outside of Windows.
MS is a big company, and the business major responsible for managing the project has probably never heard of Linux.
Then there is Qualcomm, and I’m not sure what they’re doing besides milking cellular modem patents.
Flatland_Spider,
As someone who uses KVM regularly on “real” Linux machines, its absence would be notable if I were forced to use a WSL2 version of Linux.
Luckily for me I don’t regularly use Windows or WSL either, but on this particular Project Volterra hardware, it’s a bummer that it cannot boot straight into Linux. Hopefully one day it will.
I looked it up, and the Snapdragon processor used by Project Volterra supports the ARMv8.1-A instruction set.
https://chipguider.com/?cpu=qualcomm-snapdragon-8cx-gen-3
Apparently it needs ARMv8.3 for nested virtualization:
https://lwn.net/Articles/728193/
https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-a-architecture-2016-additions
It’s disappointing that Project Volterra doesn’t have it, considering the extension is already six years old. This limitation means it’s critical that the host run Linux directly instead of WSL2 if you need to use KVM for any reason.
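Tangentially, a quick way to check from inside any Linux environment (WSL2 included) whether KVM is exposed at all is to look for the device node; a small sketch, not specific to this hardware:

```shell
#!/bin/sh
# Report whether this kernel exposes KVM. Inside WSL2 on the stock
# Microsoft kernel (or on a guest whose host lacks nested-virt
# support, as with a chip missing ARMv8.3-A NV) expect "absent".
kvm_status() {
    if [ -e /dev/kvm ]; then
        echo "kvm: present"
    else
        echo "kvm: absent (no KVM in this kernel, or nested virt unavailable)"
    fi
}

kvm_status
```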
I think this is much less probable for a machine specifically targeting developers, though. I can understand why it would be a non-goal for Microsoft, but the topic of booting Linux must have come up internally.
Alfman,
That’s always the thing. I have bare-metal Linux machines which are a network call away for VMs. XD
Nothing I do requires special kernels, or kernel modules, so nested KVM is not something I would miss.
I was wondering if that was the case.
Nice link! 😀
It probably did, and that’s why Secure Boot isn’t locked like it has been in the previous MS Arm devices.
As Bill mentioned, releasing System Ready Arm machines would be preferable, and maybe the next one will be System Ready as MS gets more experience releasing Arm hardware.
If we look a decade back, the exact same thing was being said, and it never happened. I doubt it will happen in the next decade unless some things change. Back then I wanted ARM to succeed in the PC segment, as it had this “open” aura, but it turned out to be a rather limited openness: you can buy a license and design your own chips, and beyond that I don’t find anything all that open about ARM. Some custom and limited ROMs for some Android devices. The Raspberry Pi and the like are basically the only hardware available that you can actually do something with, rather than perceiving it as a brick or a black box, and even with the Raspberry Pi the situation with upstream drivers is horrendous.

As for Microsoft doing something about it: unrealistic. They can’t do anything about it; it would never surpass their “Wintel” offering. If companies utilizing ARM were prepared to adhere to some common standards, it would result in much more openness, as with GNU/Linux upstream drivers, and if they were prepared to do much more than that, then maybe Windows on ARM would stand a chance. But it won’t happen, as we already have Intel doing all of that, and on the ARM side there is just too much fragmentation and partial interest to compete with a player like Intel without becoming something of an Intel of the ARM world. Both approaches serve different purposes; that is why each is successful in its field.

Apple could do it because it is an extremely vertically integrated company, yet its products and approach are not suitable for general public computing. At minimum you need much more choice than Apple is offering; one device per area is just unrealistic for satisfying the needs of general computing. Apple scales poorly in this regard.
Yes, it’s a weird déjà vu moment; people who purchased the original Arm-based Surface must be confused by the claims.
Even back then the new devices ran OK with native apps, but the scale of the MS legacy means the first thing that happens is somebody wants to run XP-era software in some sort of compatibility mode, and things either slow to a crawl or fall over, so we should never be surprised by mixed results.
For myself, in supporting legacy hardware as well as legacy software, I teeter between plummeting legacy software support and performance, and new hardware that runs too fast for legacy hardware add-ons. While most critics are overclocking, I’m often trying to slow things down so ’80s, ’90s, or even early ’00s era hardware will work somewhat reliably.
cpcf,
This is the double-edged sword of Microsoft’s monopoly. There’s so much legacy software that remains hard-coded to x86 and x86_64 Windows, which helps solidify the Windows monopoly. Yet these same hard-coded binaries impede Microsoft from being able to support new architectures efficiently (i.e. without emulation).
While both Apple and MS face similar emulation overhead, supporting legacy software long term is a far lower priority in Apple’s business model. For Apple, emulation is merely a stopgap measure, and for a limited time; both users and developers are expected to migrate to ARM. But there is currently zero expectation of a big ARM wave for Windows, which makes it extremely difficult to benefit from ARM as a Windows user.
What hardware are you referring to? Most of the old hardware in my collection became unusable due to the lack of software/driver support in new versions of Windows rather than actual hardware incompatibilities.
@cpcf
Part of the problem is indeed supporting (legacy) software, but in my opinion more pressing issues arise far before that. For example, around a decade back you could buy an ARM laptop, and GNU/Linux in general had good ARM software support. Still, the idea that you would download some popular GNU/Linux distribution, install it on such hardware, and start using the software it provides? Well, you couldn’t, and you still can’t; it just never worked like that. Currently ARM is just too limited to work outside some vertically integrated chain like Apple’s, if we are talking about desktop computing. As for Microsoft and the idea that they can force such limited systems to replace “Wintel”: thanks but no thanks. ARM should do better; after that we can discuss this further. For example, installing stock Debian on a Raspberry Pi and being able to boot. Before that gets sorted out, thanks but no thanks, ARM.
Geck,
You’re right, and it frustrates me as well. Obviously cpcf is talking about Windows users and you are talking about Linux users, so naturally you’re both coming from very different perspectives.
Linux adoption on ARM is tedious at best, with every product needing drivers, having to be jailbroken, reverse engineered, etc. Many Linux SBCs exist thanks to vendor blobs and official Android support, though disappointingly they tend to be limited to older non-mainline Android kernels. It’s a pretty sad state of affairs and a far cry from the open and commodity hardware that we all wanted ARM to be, but that’s life.
I share misgivings about the Microsoft monopoly, but as a pragmatist I think they could be the best path forward for standardizing ARM PCs and solving the fragmentation mess that surrounds Android vendors. But, critically, this all hinges on ARM hardware not being boot-locked, so that owners aren’t denied the right to install Linux even though it’s otherwise fully compatible.
@Alfman
In my opinion Microsoft likely gets mentioned due to its position in the PC market, and the assumption that because of that they are supposed to do something to advance ARM in this area. Personally I don’t see it; it’s not clear what exactly they should or could do, or why. They have a very limited grip on and relationship with ARM, and for them the motivation is at best “let’s be more like Apple,” likely knowing they will never be as Apple as Apple is. Hence, in my opinion, somebody else will advance ARM in this area beyond Apple, if that is to happen. And in the long run I do wonder if Apple won’t make the switch again.
Geck,
I’m not saying they’re going to fix it, but if they do manage to do it in an equitable way, then I’m not going to let my prejudices against their monopoly get in the way of an otherwise good thing for Linux users like me. The end results are simply more important than holding a grudge.
I don’t think “let’s be more like Apple” captures the whole story. The x86 dependency precludes Windows devices from being as energy efficient as ARM devices are for mobile. It’s neither unreasonable nor unexpected for Microsoft to want to support ARM; the main problem for them is the one cpcf mentioned: binary x86 software.
@Alfman
Then we basically agree: Microsoft is not going to fix it. In the end they are just not an ARM-oriented company; they more or less perceive ARM as something they could use in some limited fashion, under the impression that if they can’t do it on “Wintel,” then somehow they will succeed with ARM and be more like Apple. The reality is that this stands no real chance against “Wintel” (or Apple). Hence I don’t see “old software” as the main problem here. They are just not in the ARM business; they are a “Wintel”-oriented company, and they won’t undermine that to push ARM forward. It’s a job for some other company, and for ARM itself, to try to do that, not Microsoft, as Microsoft is basically the main competitor of ARM in the PC segment. As for prejudice and monopoly: none of that is at play when we are speaking about Microsoft and ARM. It’s just not a thing.
My main laptop is a second-hand ARM laptop sold under the “Windows on ARM” slogan: a Lenovo Yoga C630, originally sold around 2019, based on the Snapdragon 850. It is not, and was never supposed to be, a speed demon, but with 8GB RAM and a quite fast SSD, it is noticeably snappier than my HDD-based i7 of the same age. But yes, as soon as I got it I installed Debian on it, so I’m unable to say how well Windows ran.
I just love this machine. Its weight, battery life, and screen quality are what I wanted. Hardware support is far from great (I still haven’t found a way to get usable sound). But… you wanted a happy customer of this weird initiative? I am one.
Of course, without Windows 🙂
I continue to highly suspect the main benefit of Apple’s M-series CPUs is their bribing TSMC for exclusive rights to the smallest process nodes. As Samsung and other fabs gain similar manufacturing capabilities, I expect the advantages to disappear.
dark2,
I don’t know what deals are going on behind closed doors, but clearly TSMC’s fabs gave both AMD and Apple a huge advantage over competing fabs (Intel and Samsung). Those fabs can catch up, but their reputations have been dragged through the mud, and reputation may be the hardest thing to rebuild.
When a customer drops a bag of cash and asks for exclusivity that’s not a bribe.
We’ll see. Apple picks and chooses its battles relative to the chips from Intel or AMD, and it’s more the accelerators than the CPUs themselves that account for the leads.
Apple’s performance is shocking relative to other Arm chips, and that comes down to engineering. No other manufacturer has a reason to prioritize desktop Arm chips, so I suspect the lead will last for a while. The biggest concern is Nuvia hiring lots of people from Apple’s chip team, and then getting acquired by Qualcomm.
Qualcomm and Samsung have been making Arm chips longer than Apple, and they haven’t produced anything close to the M1/M2’s performance. I know Qualcomm has tried, but their efforts have been lackluster with little uptake. Unless things change, Apple is on another level relative to the other Arm players.
Flatland_Spider,
I really dislike Qualcomm, and there’s no denying that their CPUs are lackluster compared to the competition (especially in terms of desktop performance). Regardless, though, dark2 has a point about exclusive fab advantages. Comparing products of different fab technologies is like comparing athletes from different weight classes. Historically, fab access has often decided the winners.
If rumors are true that we are reaching physical limits of fab technology, then it seems conceivable that everyone could eventually end up using the same fab tech in the future. From there it will only be possible to grow outwards. The most obvious way to do that is just to keep adding more cores, but honestly this approach isn’t as interesting to me. I’d like to see other advancements, commodity CPUs that incorporate FPGAs for instance.
I’m not disagreeing. There’s a reason Apple paid TSMC for exclusivity, and why Intel screwing up their fab processes was such a gigantic error.
Comparing chips of different architectures is just as unfair, but we have to find a way. Qualcomm is going to have to find a way to reduce the power and squeeze out some more performance with the fab tech they have access to. Intel is going to have to find a way to compete, and so is AMD.
Intel and AMD chips have an advantage over the M1/M2, and they aren’t on the same fab node or the same architecture.
I think the Ampere Altra chips might be faster than the M1/M2. I can’t find a benchmark comparing the two; the Altras usually get benchmarked against EPYC and Xeon procs.
That’s been the rumor for a while now. Like a decade or so? Are people still talking about graphene or switching away from silicon? I haven’t kept up since The Tech Report was sold.
Personal computing device side, I think we’re seeing the practical limits of core count and power usage. I joke about needing a 220v outlet in the future, but….
Server-side, power usage and heat dissipation are the problems companies need to get under control.
Flatland_Spider,
That’s an understatement 🙂 But I suppose they deserve credit for delivering as much performance as they do considering how far behind their fabs are lagging.
We can treat everything as a black box and not care about the whys & hows. I’m just concerned about how an unfair playing field can harm competition though.
Out of curiosity, what advantages are you thinking of?
I think ARM CPUs have a bit of an advantage thanks to less decode complexity in conjunction with higher code density. Intel & AMD can and do overcome this with more transistors and a much larger power budget. We accept this tradeoff, but it’s not great at performance per watt and IMHO ARM CPUs are going to continue to lead on efficiency metrics.
I guess you found this too:
https://www.anandtech.com/show/16315/the-ampere-altra-review/
With these massively parallel CPUs it typically comes down to “more cores running slower” to get more processing power overall. Apple’s MT scores aren’t the greatest so as long as we’re talking MT I agree with you that they don’t have a product that competes with Ampere Altra today.
Plug it in next to your electric range or dryer, it’ll be fine. 🙂
I’ve heard proposals to convert waste heat into municipal hot water, haha. But yeah, the efficiency is a problem. I’m glad cryptocurrencies crashed, but a new fad might bring that roaring back.
Alfman,
AMD squeezed competitive performance out of inferior nodes for years before they went fabless. Sometimes it is the person using the tools who makes the difference.
Everything being equal would be ideal. 🙂
Raw compute. Apple chips are competitive with AMD and Intel, which is incredibly good given the track record of past desktop Arm chips. Apple does rely on accelerators to get many of the numbers they’re touting.
https://www.phoronix.com/review/apple-m2-linux
Code optimization probably plays a part in the advantage enjoyed by AMD and Intel.
I agree, power-wise Arm has a much bigger advantage considering it doesn’t need the decode hardware x86 needs. However, Arm chips that are competitive on raw compute have been shown to be just as power hungry as their x86 peers.
I was thinking of Phoronix’s benchmark of the 230W, 80-core Ampere Altra compared to Intel Xeon and AMD EPYC.
https://www.phoronix.com/review/ampere-altra-q80
Arm is probably the cleaner architecture going forward, and personal computing devices would benefit from the power efficiency of the Arm chips.
I feel desktop chips should be capped at 65W peak dissipation, so what do I know? XD
It will be my dryer. XD I’ll join a folding crew and do some science while I dry my clothes.
Someone should figure out how to extract power from PC exhaust. https://en.wikipedia.org/wiki/Exhaust_heat_recovery_system
I’ve heard about DCs dumping hot water from the chillers through the plumbing. It created hot water toilets. XD
I think most DCs are recirculating their chiller water these days. 1: Reduce environmental impact from their water usage, 2: Reduce their water bill. Mostly 2.
Power density per square foot is a problem. Even before cybercoins, DCs were having problems with power density. In 2007, a fully populated blade enclosure would eat up the power budget for a 42U rack: 8-16 servers instead of 21-42 (and 21 is optimistic).
Flatland_Spider,
You are absolutely right about custom accelerators, but even on raw compute they were beating Intel’s 14nm chips in single-threaded performance on actual CPU compute benchmarks (i.e. not unfairly picked accelerator benchmarks). And AMD was beating them too. Intel remained uncompetitive until they could shrink their node again.
https://www.cpubenchmark.net/singleThread.html
Now that Apple is looking at 3nm for the M3, Intel had better reach its next-generation nodes quickly, or it will once again find itself without competitive chips because of older fabs. Intel is able to bridge the gap with significantly better cooling and a higher power budget, but only to a point. Fortunately for Intel, Apple has focused more on tiny form factors with high battery life, tradeoffs that compromise performance. Otherwise it is likely Apple’s node advantage would be more pronounced on performance benchmarks.
Oh, thanks for linking those benchmarks, I hadn’t seen them before. I wish he would categorize them into single-threaded versus multithreaded. Oh well.
This could be true, especially for hand-tuned algorithms. You might be interested in this tangent: it’s the story of how John Carmack hand-tuning Quake for Intel processors made Cyrix processors look really bad, even though they had excellent performance on general tasks.
https://www.techspot.com/article/2120-cyrix/
I still think ARM would win, but I concede it’s difficult to obtain performance-per-watt information, since so few benchmarks actually include power consumption. We can’t (or shouldn’t) just plug in the mean power or TDP numbers. I recall a motherboard vendor tool that displayed CPU power utilization on Windows, but as far as I know there is no generic API that would work everywhere for benchmarks to take advantage of. And I’ve never come across Linux tools that can do it either. Even if these existed, it could be unfair if they’re not measuring the exact same loads. Perhaps we need smart power supplies that provide power telemetry data.
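For what it’s worth, Linux does expose one semi-standard interface on x86: the RAPL energy counters under `/sys/class/powercap` (what turbostat and powertop read). It’s per-package rather than whole-system, and not available on Arm, but sampling it around a workload gives a power figure. A rough sketch, assuming the common `intel-rapl:0` path exists on the machine:

```python
import time

# Cumulative package-0 energy counter in microjoules (Intel/AMD RAPL).
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=RAPL_PATH):
    """Read the cumulative energy counter (microjoules)."""
    with open(path) as f:
        return int(f.read())

def average_watts(e_start_uj, e_end_uj, seconds):
    """Average power over an interval from two cumulative readings.
    Note: the counter wraps at max_energy_range_uj; ignored in this sketch."""
    return (e_end_uj - e_start_uj) / 1e6 / seconds

def measure(workload, path=RAPL_PATH):
    """Run workload() and return its average package power draw in watts."""
    e0, t0 = read_energy_uj(path), time.time()
    workload()
    e1, t1 = read_energy_uj(path), time.time()
    return average_watts(e0, e1, t1 - t0)
```

Of course this only measures the CPU package, so it doesn’t settle the whole-system question; smart PSUs with telemetry would still be the fairer instrument.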
RPI to the rescue 🙂
That’s either mad science, or it is brilliant, haha.
Edit:
“They told me I was stupid – heating my pool with computers”
https://www.youtube.com/watch?v=4ozYlgOuYis
I’ve got no pool to heat though.
Alfman,
I’m interested to see if the M3 will be the version to replace the Intel CPUs in the Mac Pro. I think that will be a real test to see if Arm can deliver the performance without shredding the power envelope.
True. Intel and AMD can disregard power requirements to a certain extent, since their money is made in the server sector. Also, luckily for them, Apple doesn’t sell chips to 3rd parties except embedded in an Apple product. 🙂
XD Trade offs!
It would be nice if MBs had these sensors. :\
PCs switching to a UPS type hybrid PSU, or switching to DC input from a PSU, would be interesting. Most things require power bricks these days and I generally don’t run things without a UPS, so just make UPSs giant power bricks.
I’ve been trying to get my hands on a 8GB CM4 for over a year now. LOL
The SolidRun Honeycomb might be better for a desktop though, and MNT Reform has a new board which would be cool as a small desktop. I would need a different case or carrier board if the 4x switch was desired.
https://shop.mntmn.com/products/mnt-reform-ls1028a-module-preorder?taxon_id=13
LOL Scavenging waste heat is something that should be normalized in residential housing. 🙂
There are a lot of things which could be normalized in housing which would be helpful.
Flatland_Spider,
It will be interesting. I think AMD may be the one to beat, though, since Intel is still paying a high power cost for lagging on fab technology.
I wish 3rd-party vendors were allowed to sell Apple’s ARM CPUs; even people who don’t want Apple devices could still benefit from their CPUs. Although I get why Apple doesn’t want its CPUs competing with their own devices.
I would think many motherboards do monitor power under the hood, because the BIOSes might need it to help regulate the CPU…? But I only recall one motherboard coming with Windows software to view it.
I don’t know how many computers support this. After searching just now, I found an application screenshot that shows CPU watts. I doubt it is generic; it probably needs to be reverse engineered for the motherboards in question (much like supporting temperature sensors and fans in Linux).
https://openhardwaremonitor.org/
It’s not a bad idea, but unless you’ve got an enormous heat load and a convenient place to dump it, I think it could be hard to justify the logistical difficulties. Even with something like a fridge, there’s no reason to dump that heat into the house in the summer, but it also seems like a lot of work to add outside ventilation and/or water cooling for just 50 watts of average power (which might be higher in the summer).
https://www.kompulsa.com/refrigerator-power-consumption-deciphering-the-label/
The installation and maintenance costs would likely be much higher for a fridge that isn’t contained. The TV cable box of all things produces an enormous amount of heat even when idle (inexcusable, but it is what it is). Does it make sense to run conduit and/or pipes to the living room? haha.
There is clearly a lot of waste heat, but I don’t know if there’s a practical way to recoup it especially given the low delta-temps.
Alfman,
This is true. The benchmarks would be Xeon (???) vs Threadripper (Genoa?) vs Mac Pro (Xeon) vs Mac Pro (Arm).
That’s cool. 🙂 It could be ACPI, but yeah, probably MB specific.
I want to say BMCs might show that, but I don’t have a server with a BMC to check right now.
[Quote]
It’s not a bad idea, but unless you’ve got an enormous heat load and a convenient place to dump it, I think it could be hard to justify the logistical difficulties. Even something like a fridge, there’s no reason to dump that heat into the house in the summer, but it also seems like a lot of work to add outside ventilation and/or water cooling for just 50watts of average power. (this might be higher in the summer).
[/Quote]
Yeah. I mainly want to dump it into the attic and use it to supplement the water heater. The other reason was I was trying to reason out how it would be possible to get fresh air into the house without losing all of the energy by opening a window.
I’m thinking solar panels would be involved, to produce some sort of solar thermal system. People are looking into ways to let solar panels generate electricity from heat as well, so the panels can generate electricity in a secondary way.
https://inhabitat.com/pvt-solar-panels-generate-heat-and-electricity-at-the-same-time/
This is me spitballing after spending some time looking into ways to generate electricity from the heat in my shed. I don’t have the shed anymore, so it’s a moot point.
That’s cool. I haven’t had to buy a fridge in a couple decades.
I forgot about cable boxes being fire hazards. LOL
Yes, but I hate dealing with residential cabling. LOL
That sounds like the beginning of a government grant. 🙂
I continue to suspect that you have zero understanding how chip manufacturing works.
javiercero1,
Textbook ad hominem attack. At least debate the substance rather than the person.
[Quote]
Either Qualcomm finally gets its act together and comes up with an SoC to rival Apple’s M series, or Microsoft takes matters into its own hands
[/Quote]
With the lawsuit from ARM against Qualcomm, this might not be that easy.
Does anyone have any insider news about that lawsuit?
If ARM wants to compete in the PC segment, building some processor with some specs is much less important than continuing the work already done, and going beyond where Android failed. That is, somebody needs to RTFM and start using GNU/Linux in the right way: provide and maintain quality upstream device drivers for ARM hardware, like companies such as Intel, AMD, and partially Nvidia currently do. Once that happens, the software problem is more or less resolved, given the device is unlocked and able to boot a standard GNU/Linux distribution. Once you have that, there is no real reason anymore not to choose ARM over x86 for PC purposes. Before that happens, thanks but no thanks; x86 is superior, and ARM still has some work left to do before it’s considered mature enough for the PC segment. The Raspberry Pi is a nice benchmark of where we currently stand: in the last decade, around half of the work needed was completed. Now the other half.
Disclaimer: Microsoft employee who’s not really involved but is strangely emotionally attached
I’m not so sure. Emulating very old software is a low bar since it’s expecting very old hardware. The focus here is on getting ARM-native software, which is relatively easy in an open source, abstraction layer heavy world. The article mentions a couple Electron-based apps with sluggish performance, but since the browser engines are ARM native, this is a packaging issue, not an engineering one.
Personally I haven’t ended up with much x86 software running on my ARM device. The remaining pieces (like git) just aren’t performance critical. The CPU intensive things – browsers, compilers, etc – have already been ported.
The bigger issue which this article hints at is there’s no point using an ARM device if it can’t beat Intel on performance per watt, and these higher end systems with higher end power consumption don’t seem to do that. If this doesn’t change, ARM becomes a niche at the low power, highly portable end of the PC spectrum.
That said, once you have a lightweight fanless laptop with days of battery life, it’s really hard to go back.
malxau,
These are very good points. So long as we’re talking about “legacy software”, it’s probably not going to be greatly hindered by performance anyway. Performance isn’t the problem so much as efficiency is. Efficiency is the whole reason to opt for ARM, yet resorting to x86 emulation means one kills off the efficiency benefits of ARM.
I suspect gaming is probably a worst case scenario. Most titles are x86 only and emulation performance costs will detract from the experience on top of the inefficiency of it.
There are many intel models in the surface lineup that are fanless and personally I always plug in my devices, but yeah I hear what you’re saying.
Microsoft failing to migrate to ARM is the only thing that can save Linux on the desktop. We need similarly open and replaceable hardware for ARM, instead of the current batch of vendor lock-in crap from Apple and Qualcomm. Linux would be set back 20 years if we were forced to use contemporary ARM hardware with community-developed drivers.
sj87,
I imagine most of us expect Windows on x86 and ARM to continue to coexist side by side. I don’t see it happening any time soon, but hypothetically, if ARM were to radically wipe out x86, then you are right: it becomes a whole lot more critical to FOSS that these ARM problems get fixed. The fragmentation issues we’ve been seeing with your average ARM device would become a huge barrier for Linux users wanting to use cheap commodity hardware like we currently do with x86.
I would hope that things become better and not worse, but if they don’t, then it’s a valid concern; we need to be careful what we wish for.