Loongson has long been known for its open-source-friendly, MIPS-based chips, but with MIPS now a dead end, the Chinese company has begun producing chips using its own “LoongArch” ISA. The first Loongson 3A5000 series hardware was just announced, and thanks to the company apparently using the Phoronix Test Suite and OpenBenchmarking.org, we have some initial numbers.
Announced this week, the Loongson 3A5000 is their first LoongArch ISA chip: a quad-core part with clock speeds of up to 2.3~2.5GHz. The Loongson 3A5000 offers a reported 50% performance boost over their prior MIPS-based chips while consuming less power, and it now also supports DDR4-3200 memory. The Loongson 3A5000 series is intended for domestic Chinese PCs without relying on foreign IP, and there are also the 3A5000LL processors intended for servers.
Performance isn’t even remotely interesting – for now. The Loongson processors will improve by leaps and bounds over the coming years, if only because it will have the backing of the regime. I hope some enterprising people import these to the west, because I’d love to see them in action. Nothing in technology excites me more than odd architectures.
I am disappointed they didn’t buy VIA and start cranking out x86 chips. The fact that VIA is owned by a Taiwanese company may have something to do with it.
Well, a different Chinese company basically did that, but I have to ask: given that the system is basically designed to run Linux or other embedded OSes, why does x86 vs. something else matter?
Malware attacks not targeted at your platform?
Fair point, but eventually every popular platform becomes a target.
I always thought an uploadable ISA would be a good idea, i.e. a “writable control store” or “reconfigurable computing”. A CPU doesn’t have to be fast, just gobbledegook to any hostile alien code that wants to run on it. Also, one person’s exploitable CPU bug is another person’s honeypot.
@HollyB there have been some WISCs (writable instruction set computers). I remember a dedicated WISC coprocessor in the 80s (actually called WISC) that had no native ISA, just a writable microcode store. IIRC it was used for Smalltalk and AI work.
The Xerox Alto was a WISC architecture: its default microcode emulated a Data General Nova, with additional microcode for managing the display.
PRIME Computers had a set ISA, and could be enhanced with additional microcode. Their INFORMATION database took advantage of that.
Hybrid CPU+FPGA architectures such as the Zynq, Intel Xeon+FPGA hybrid CPUs, and others could be seen as a continuation of this idea.
@HollyB
The PDP-11/60 had user-writable microcode. Sadly, it was much more expensive, and no faster, than the top-performing PDP-11 at the time. Therefore it wasn’t commercially successful.
http://gunkies.org/wiki/PDP-11/60
Using x86 is just an unnecessary overhead, and the only reason for doing so would be compatibility with existing closed source software. If they are going to reuse an existing architecture, there are many better choices out there than x86 – hence their original decision to go with MIPS.
However, one of the primary goals of this exercise is to gain independence from foreign technology, so it’s unlikely they would want to be running foreign closed source software compiled for x86 in any case – certainly not at the system level, and even at the user level (where it can be emulated in the short term) they would be looking to replace it with natively developed Chinese software as soon as they can.
bert64,
I agree that if there’s a good opportunity to use a cleaner instruction set, it wouldn’t make sense to use an ISA that’s difficult to pre-fetch. However, we shouldn’t underestimate just how important backwards compatibility has been for the software industry as it exists today.
In theory, if everything could be distributed as source code, it would make the ISA matter far less, but in practice it’s not always that simple. Even if you have full source code, it can still take work to port an application, and you can get into dependency hell. Even on Linux I use some software that I don’t have source code for, such as my graphics, CPU fan, and UPS drivers. Most Linux games (i.e. those on Steam) don’t come with source code either.
Hypothetically, if all software were compiled and distributed as intermediate bytecode (like Java) instead of native instructions, the ISA would become nearly irrelevant. But it takes a lot of power to shift the entire industry. Nobody agrees on programming languages as it is. Even assuming someone comes up with a good architecture, it may not look that great in the eyes of the public without flawless software support. IMHO it’s tough to get there.
“In theory, if everything could be distributed as source code, it would make the ISA matter far less”
A growing community now exists of people using WebAssembly even for non-web-browser stuff. It can easily be turned back into source code, too; well, JavaScript that is, which would probably look a lot like asm.js and not the original C or Rust code, I guess.
So basically it’s a bytecode format, which makes it ISA-independent. You port something like V8, implement the assembly backend for the JIT, and you have a working system on a new ISA.
Java was one of the few games in town for this; now we have a second runtime. A rough sketch of that workflow is below.
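A minimal sketch of the idea, assuming a WASI-capable compiler (for example the clang bundled with the wasi-sdk) and a standalone runtime such as wasmtime; the exact flags and paths vary by toolchain:

```c
/* hello.c - a minimal sketch of ISA-independent distribution via WebAssembly.
 * Assumed toolchain: a WASI-capable clang (e.g. from the wasi-sdk) and the
 * wasmtime runtime; exact paths and flags vary by installation.
 *
 *   clang --target=wasm32-wasi hello.c -o hello.wasm   # compile once, to bytecode
 *   wasmtime hello.wasm                                # run on x86-64, ARM, LoongArch, ...
 *
 * The .wasm file contains no native instructions, so the same binary runs on
 * any CPU for which the runtime (interpreter/JIT) has been ported.
 */
#include <stdio.h>

int main(void) {
    printf("Hello from an ISA-independent module\n");
    return 0;
}
```

The distributed artifact is bytecode, so only the runtime needs porting to a new ISA; the application binaries stay the same.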
Lennie,
Asm.js is quite a hack IMHO; I wouldn’t use it for anything but browser environments, where backwards compatibility sets the rules.
WebAssembly could work, but I haven’t used it yet, so I can’t really say what my opinion of it is.
LLVM produces an intermediate representation; this is what Google’s PNaCl was based on, which has since been discontinued. But regardless, if software were distributed as LLVM IR it would be highly portable (a rough sketch of the idea follows below). The biggest challenge is actually getting all developers & operating systems to agree to change their practices.
Platforms like Android have a better handle on this problem because they started out with portable bytecode and then became popular, which means the portable code is dominant, even making it possible to automatically translate between Dalvik and ART. But this isn’t normally the case with PC platforms (Windows/Mac/Linux). So while it’s easy to envision what would have to change, the challenge is really getting there.
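For illustration, here is a rough sketch of what “distributing LLVM IR” can look like with a stock clang/llc toolchain. The commands are the standard ones, but this is only an example of the general idea, not how PNaCl actually packaged modules:

```c
/* portable.c - rough illustration of shipping LLVM IR instead of native code.
 * Assumes a stock clang/LLVM installation.
 *
 *   clang -O2 -S -emit-llvm portable.c -o portable.ll        # front end: C -> LLVM IR
 *   llc -mtriple=x86_64-linux-gnu  portable.ll -o p-x86.s    # IR -> x86-64 assembly
 *   llc -mtriple=aarch64-linux-gnu portable.ll -o p-arm64.s  # same IR -> AArch64 assembly
 *
 * Caveat: plain IR still bakes in pointer sizes and ABI details from the
 * original target triple, which is one reason PNaCl defined a frozen,
 * normalized subset of the IR rather than shipping it raw.
 */
int add(int a, int b) {
    return a + b;
}
```

The same .ll file feeding two backends is the portability argument in miniature; the caveat in the comment is part of why raw IR never became a universal distribution format.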
x86 licenses are not transferable.
But I believe AMD has some collaboration with a Chinese vendor.
Huawei has Beijing’s backing only on the political front, yet it survived the West’s onslaught of attacks.
What I’d say is more interesting is that China has their own line of x86 chips from a company called Zhaoxin, and there isn’t shit AMD and Intel can say about it, as they bought the rights from Centaur, which, thanks to their buying of Cyrix, means they have the right to make their own x86 chips. I saw a review of one a couple of years ago and they are up to about a 3rd-gen i5, which, considering they have only been at it a few years, is really not fricking bad progress, especially considering it was an SoC, so making boards for it should be pretty cheap once they ramp up production.
While I seriously doubt either company will be able to compete with the big boys, the more choices and competition of ideas we have in this market, the better as far as I’m concerned. We saw what happened when Intel was left to stagnate during the AMD FX years, so if we can get more companies with more ideas, that sounds wonderful, like the old days when we had everything from SPARC to MIPS to ARM to x86 all doing their own things.
It’s not a case of “only been at it a few years”… As you said, they bought the initial tech from Centaur and Cyrix, who had been making x86-compatible chips for a long time.
Both China and Russia have their own processor designs, and a strong desire not to be dependent on foreign technology. It’s likely other countries like Iran or North Korea have similar desires, but lack the resources to do so.
Actually, it HAS only been a few years, as VIA’s last x86 product was the Nano in 2005; since then they have been working on cryptographic hardware and their x86 line has been lying dormant.
So considering they bought tech that was last considered competitive when the Pentium 4 was mainstream, I’d say reaching 3rd-gen Core level in just under 4 years is pretty damn impressive. It would be like buying AMD’s Barton core design from the early 00s and making it usable as a desktop chip in just 4 years; that is a hell of a leap.
Did Cyrix ever have a license for x86-64? I wasn’t aware they did.
As I commented a few days ago, Cyrix were bought out by National Semiconductor, then absorbed by VIA. It’s all on Wikipedia.
I’m not laughing at their CPU. Performance-wise it’s competitive and more than adequate for the vast majority of business and consumer users. The point where CPUs were adequate and no longer needed more oomph was passed a few years ago, as everyone knows. For artists, gamers, and anyone requiring faster than fast, faster is available, but for your everyday applications a CPU of this class plus 16GB and an SSD is enough. The only thing I cannot easily find is the CPU’s TDP. All the reviewers and news sites are so busy measuring dick length they forgot to mention power requirements.
I remember the switchover from single core to dual core and a magnitude increase in GPU shaders. My then desktop went pop, so I bought a new one. I hadn’t planned this but ended up with an AMD X2 64 4200 and an ATI 2600 XT. For development work this was more than good enough. The thing is, nobody, absolutely nobody, in the games media discussed this, and it blew the socks off every single game I threw at it on max settings for two solid years. Then, as the next generations of CPUs and GPUs came out, after six months it began to stutter, and by the end of the year I needed medium-to-low settings to play any game.
On paper this Chinese CPU blows the socks off my laptop, which blew the socks off my old desktop. The thing is, it’s all relative and depends on your use case. As for my current laptop and the new desktop screen I have plugged into my dock, the total TDP is about one third to one half of what my old stuff used to be. I also switched from halogen to LED lighting around the same time, and my electricity bill is easily a third of what it used to be.
Of course, Microsoft have decided to turn my computer into landfill because central diktat decreed it. Thanks. How very Chinese-state of them.
Personally, I’m shocked they are as performant as they are. Yeah, they’re a few CPU generations behind, but only by 6-7 years, I think. That’s pretty impressive for a first release. These are not to be laughed at; I would not be surprised if they catch up quickly.
Excellent, I’ll bookmark that, because, well… we’ll talk about the results in a few years.
So it’s a MIPS-derived ISA, just like RISC-V. So perhaps in the future they will use that.
The old Loongsons were MIPS; the article says that the 3A5000 is based on a new, original ISA.
Nothing’s ever “original”, so I bet there’s a lot of MIPS still to be found in there, just like RISC-V is a new, original ISA but based on MIPS. MIPS is pretty standard RISC after all, so designing a new RISC ISA will always have some MIPS feel to it.
RISC-V is based on Berkeley RISC, MIPS on Stanford’s. So I am not sure how much cross-pollination there was. I mean, both share the same kinds of concepts, and those concepts flow through most RISC designs. But if you mean actual instructions, then feel free to provide examples of what you mean.
RISC-V may be based more on SPARC than MIPS, if anything.