Linked by Thom Holwerda on Sat 20th Jan 2018 00:13 UTC
Hardware, Embedded Systems

The disclosure of the Meltdown and Spectre vulnerabilities has brought a new level of attention to the security bugs that can lurk at the hardware level. Massive amounts of work have gone into improving the (still poor) security of our software, but all of that is in vain if the hardware gives away the game. The CPUs that we run in our systems are highly proprietary and have been shown to contain unpleasant surprises (the Intel management engine, for example). It is thus natural to wonder whether it is time to make a move to open-source hardware, much like we have done with our software. Such a move may well be possible, and it would certainly offer some benefits, but it would be no panacea.

Given the complexity of modern CPUs and the fierceness of the market in which they are sold, it might be surprising to think that they could be developed in an open manner. But there are serious initiatives working in this area; the idea of an open CPU design is not pure fantasy. A quick look around turns up several efforts; the following list is necessarily incomplete.

Yes! We do need them!
by Poseidon on Sat 20th Jan 2018 02:20 UTC
Poseidon
Member since:
2009-10-31

Absolutely. The thing is that companies will not do this for a multitude of reasons, including probably some serious "security by obscurity" decisions they've made in the past, or just plain fear of competition from what others can gather.

Gotta keep that monopoly tight or the investors get nervous.

Reply Score: 4

RE: Yes! We do need them!
by Megol on Sat 20th Jan 2018 15:16 UTC in reply to "Yes! We do need them!"
Megol Member since:
2011-04-11

Absolutely. The thing is that companies will not do this for a multitude of reasons, including probably some serious "security by obscurity" decisions they've made in the past, or just plain fear of competition from what others can gather.


In the majority of cases where I see anyone use "security through obscurity", it's used incorrectly. Most of the time people don't understand that obfuscation is a powerful tool together with other security measures, one that has been proven effective in practice throughout documented history.

But here I don't even see that misunderstanding. How is this even remotely relevant?


Gotta keep that monopoly tight or the investors get nervous.


No one stops you from using some freely available processor, physical or virtual. There are several available today. Making a new one running in an emulator or FPGA is trivial unless trying to do something innovative. Many people design processors for fun, from 8 bit CISC cores to 64 bit VLIWs.

(It is of course not trivial to make a realistic system for the masses)

But software is more expensive than the hardware in almost all cases. So you want the processor to run the software you need (or crave).

Open source software helps a bit. But you still have to conform to the expectations the source code has built in: commonly Unix systems (or close enough) with a lot of external software. That limits the practicality of changing anything, from how the processor works to what instruction set the processor should have.

This also limits what types of protection are reasonable. AMD64 dropped the (admittedly flawed) segmentation-based protection for a pure virtual-memory-based one, the one Unix and Windows NT (and most other systems) expect. Why? Nobody used it.

X86 isn't a problem as such. The monopoly isn't really one. Emulation of the instruction set is unlikely to violate any patents, and many parts of the ISA are already outside patent protection.

But what makes x86 still viable isn't a form of monopoly, it's software availability. X86 is supported by the majority of existing, still-used software. Alternatives aren't, but again, if the limitations of current open source software are acceptable, it helps a bit. Not completely though, but you'll not hear that from OSS fanatics.

Reply Score: 4

RE[2]: Yes! We do need them!
by rom508 on Sat 20th Jan 2018 15:31 UTC in reply to "RE: Yes! We do need them!"
rom508 Member since:
2007-04-20

But what makes x86 still viable isn't a form of monopoly, it's software availability. X86 is supported by the majority of existing, still used, software.


I think it's somewhat irrelevant, since the vast majority of software is written in high-level programming languages and does not directly depend on a specific ISA. If somebody started selling ARM or SPARC 10x cheaper and 10x more performant than anything x86 could offer, then most software vendors would quickly rebuild and qualify their products for those platforms. It is all about market uptake and how many sales can be made.

Edited 2018-01-20 15:39 UTC

Reply Score: 1

RE[3]: Yes! We do need them!
by Megol on Sat 20th Jan 2018 15:35 UTC in reply to "RE[2]: Yes! We do need them!"
Megol Member since:
2011-04-11

"But what makes x86 still viable isn't a form of monopoly, it's software availability. X86 is supported by the majority of existing, still used, software.


I think it's somewhat irrelevant, since the vast majority of software is written in high-level programming languages and does not directly depend on a specific ISA. If somebody started selling ARM or SPARC 10x cheaper and 10x more performant than anything x86 could offer, then most software vendors would quickly rebuild and qualify their products for those platforms. It is all about market uptake and how many sales can be made.
"

I agree in theory but in practice it isn't as simple as that. High-performance software is likely to be designed around the SIMD vector support of x86 for instance.

Reply Score: 3

RE[3]: Yes! We do need them!
by ahferroin7 on Mon 22nd Jan 2018 12:51 UTC in reply to "RE[2]: Yes! We do need them!"
ahferroin7 Member since:
2015-10-30

Except that a very large number of programs written in high-level languages still use (either directly or indirectly) a not-insignificant amount of code written in assembly language, which is pretty much by definition not portable, and which quite often is the performance-critical section of the program (that is, it's in assembly because it's performance-critical). Go take a look at OpenSSL, FFmpeg, or pretty much anything else that does high-performance crypto or DSP work if you don't believe me.

As an example, Windows doesn't run on anything but x86 and ARM (today at least; back in the NT4 days it ran on almost everything), and there's still shitty support on ARM from most vendors despite the fact that ARM CPUs are already significantly cheaper than x86 and generally offer higher performance per watt (which is what really matters for most of the big-name users out there who drive the industry).

Reply Score: 3

RE[2]: Yes! We do need them!
by dionicio on Mon 22nd Jan 2018 15:36 UTC in reply to "RE: Yes! We do need them!"
dionicio Member since:
2006-07-12

"No one stops you from using some freely available processor, physical or virtual. There are several available today. Making a new one running in an emulator or FPGA is trivial unless trying to do something innovative. "

I'm of the opinion that, just like the CPU industry, the FPGA industry was also driven into a strong oligopoly, with strong codependencies on the military-industrial complex.

The status quo of the Western electronics industry is no coincidence; it never has been, as far as I can remember. Our fragility is deliberate, born of the ambitions of very small, very deep-pocketed interest groups.

Reply Score: 3

RE: Yes! We do need them!
by dionicio on Mon 22nd Jan 2018 15:27 UTC in reply to "Yes! We do need them!"
dionicio Member since:
2006-07-12

Diamond dust tomography existed well before the microelectronics industry.

Security through obscurity is only for the average Jane and Joe.

Only difference here is that Jane and Joe won't know what's wrong with your design.

Obscurity and secrecy are part of the big problem keeping West near to saying good-bye right now.

People don't swallow pills, neither red nor blue, if trust is DEAD.

Reply Score: 1

yeah, right!
by sergio on Sat 20th Jan 2018 06:02 UTC
sergio
Member since:
2005-07-06

I'd love to buy open source CPUs!! As the article mentions, this is not a new thing; in fact, vendors like IBM and Sun/Oracle have been promoting the open source approach to CPU development for at least 10 years (google OpenPOWER or OpenSPARC). Obviously nobody cares about POWER or SPARC anymore... and that's why IBM/Oracle open sourced them in the first place. There's no value in them.

The problem is, x86 CPUs aren't a commodity yet; there's a lot of value in them and people are willing to pay PREMIUM prices for new Intel or AMD "innovations"... so forget about Intel/AMD open sourcing CPUs. xD

Edited 2018-01-20 06:04 UTC

Reply Score: 2

RE: yeah, right!
by Kochise on Sat 20th Jan 2018 06:50 UTC in reply to "yeah, right!"
Kochise Member since:
2006-03-03

What about RISC-V? You have almost everything you can dream of to start with here:

https://opencores.org/projects

Reply Score: 2

RE[2]: yeah, right!
by JohnnyO on Sat 20th Jan 2018 10:35 UTC in reply to "RE: yeah, right!"
JohnnyO Member since:
2009-10-15

FreeBSD has supported RISC-V for some time now:
https://wiki.freebsd.org/riscv
Linux probably supports RISC-V too.

But, AFAIK, there is no application-class RISC-V CPU in production yet.

Reply Score: 2

RE[3]: yeah, right!
by ahferroin7 on Mon 22nd Jan 2018 13:01 UTC in reply to "RE[2]: yeah, right!"
ahferroin7 Member since:
2015-10-30

Yes, Linux has preliminary support for RISC-V (it's enough that you can boot a system and do some stuff from userspace, but it's still not feature-complete compared to x86, ARM, POWER, SPARC, or MIPS).

As far as not having any physical implementations, that's not really an issue: RISC-V was designed to be essentially trivial to synthesize as a soft core on an FPGA or similar device. That said, I'm not entirely sure there aren't any physical implementations. Western Digital has been talking about using it for their disk controllers (that is, the logic on the disk itself), and I very much doubt they would do that without an ASIC implementation of a RISC-V core.

Reply Score: 2

OK any volunteers then?
by rom508 on Sat 20th Jan 2018 11:38 UTC
rom508
Member since:
2007-04-20

I'm sure it has been done on a small scale, but there is another question: anybody feeling clever enough to release an open source x86 CPU design, verify it end-to-end, and then have it manufactured so that it can compete with Intel/AMD? There is nothing stopping you really, apart from the huge amount of time, money and resources, and then you could just give it away into the public domain and feel really smug (or stupid) about yourself. I'm sure Intel/AMD would love to manufacture this incredible CPU without having to spend millions on R&D. Thom, you can start the revolution and we'll see how you get on.

Reply Score: 1

RE: OK any volunteers then?
by Kochise on Sat 20th Jan 2018 12:49 UTC in reply to "OK any volunteers then?"
Kochise Member since:
2006-03-03

Ever heard of patents, NDAs, royalties? The x86 ISA is under embargo by Intel; only AMD can use it, in exchange for 64-bit support in return. Why do you think Cyrix, VIA and others have quit the x86 market?

There's a better chance with ARM, especially ARMv8.

Btw, Qualcomm will buy NXP, which bought Freescale some time ago. There's a load of semiconductor portfolio there, so why not ask them to improve the 68k architecture and open it?

Reply Score: 2

RE[2]: OK any volunteers then?
by rom508 on Sat 20th Jan 2018 13:48 UTC in reply to "RE: OK any volunteers then?"
rom508 Member since:
2007-04-20

Assuming there were no patents around x86, it would still be difficult to come up with a brand new, open source design that could compete with Intel/AMD designs. They put a lot of investment into R&D and have very smart hardware engineers working full time. This is part of the reason other commercial CPUs (e.g. ARM) are not so prominent in data centres. There is a lot of software that still needs good single-thread performance, and Intel/AMD CPUs tend to offer better performance at a better price. I really hope ARM will close the gap and we will see more ARM servers from various vendors. Now if you start looking at things like RISC-V, there is a lot of hype but not many viable hardware platforms that enterprises can actually use.

Reply Score: 3

RE[2]: OK any volunteers then?
by Megol on Sat 20th Jan 2018 15:33 UTC in reply to "RE: OK any volunteers then?"
Megol Member since:
2011-04-11

Ever heard of patents, NDAs, royalties? The x86 ISA is under embargo by Intel; only AMD can use it, in exchange for 64-bit support in return. Why do you think Cyrix, VIA and others have quit the x86 market?


VIA hasn't quit. Centaur Technology is still up and running.


There's a better chance with arm, especially armv8.


ARM is one of the most litigious processor designers around, and ARMv8 is absolutely littered with patents; that would be a better choice than, e.g., RISC-V, which isn't patented?

ARMv2 could be a good choice for an open processor as it's relatively simple, relatively fast per transistor, and no longer patent protected.

Of course, as long as one accepts writing the required compiler support, designing a RISC ISA is easy.


Btw, Qualcomm will buy NXP, which bought Freescale some time ago. There's a load of semiconductor portfolio there, so why not ask them to improve the 68k architecture and open it?


Yeah that will not happen. And while the 68k was a very nice design I don't see it as suitable as a new open ISA.

Reply Score: 5

RE[2]: OK any volunteers then?
by FlyingJester on Mon 22nd Jan 2018 18:38 UTC in reply to "RE: OK any volunteers then?"
FlyingJester Member since:
2016-05-11

We already have open source UltraSparc designs that are quite capable. The patents on SH2 have also run out.

Reply Score: 3

RE[3]: OK any volunteers then?
by jockm on Mon 22nd Jan 2018 20:01 UTC in reply to "RE[2]: OK any volunteers then?"
jockm Member since:
2012-12-22

We already have open source UltraSparc designs that are quite capable. The patents on SH2 have also run out.


And there is an open implementation of SH2: J-Core.org

Reply Score: 3

RE[4]: OK any volunteers then?
by FlyingJester on Mon 22nd Jan 2018 21:22 UTC in reply to "RE[3]: OK any volunteers then?"
FlyingJester Member since:
2016-05-11

Right. Why don't we see effort put into these designs, which already have proven and mature hardware, OS, and compiler support?

Reply Score: 2

RE[5]: OK any volunteers then?
by jockm on Mon 22nd Jan 2018 21:49 UTC in reply to "RE[4]: OK any volunteers then?"
jockm Member since:
2012-12-22

Right. Why don't we see effort put into these designs, which already have proven and mature hardware, OS, and compiler support?


Because until recently there hasn't been much interest in open silicon outside of the hobbyist and research space. The angel funding of J-Core and the commercial adoption of RISC-V are examples of this changing.

Reply Score: 3

RE: OK any volunteers then?
by agentj on Sat 20th Jan 2018 13:08 UTC in reply to "OK any volunteers then?"
agentj Member since:
2005-08-19

Sorry, this ain't gonna happen. This is as smart as trying to make an open source space shuttle or Death Star... So many resources and so much knowledge are needed to design, verify and manufacture a processor that can compete with anything from ARM, AMD or Intel that some random cave dwellers with the idea of making open source processors can never do it. In big companies that design CPUs there are specialists even for "simple" things like transistor layout, packaging, integrated circuit routing, simulation design, verification design, verification of verification, etc. This is not something you can learn from Stack Overflow. This shit is done by multiple PhD-level folks.
There is also a lot of multi-million dollar software required to simulate and design such a processor, and this software does phone home, so no company is stupid enough to release such software to the public.
The current open source hardware "state of the art" is 50 years behind everything else.

Reply Score: 2

RE[2]: OK any volunteers then?
by Kochise on Sat 20th Jan 2018 14:48 UTC in reply to "RE: OK any volunteers then?"
Kochise Member since:
2006-03-03

People can always try to redo an open 6502 to start with, then improve the design iteratively in the next 200 years to reach something like the 286.

Reply Score: 2

RE[3]: OK any volunteers then?
by jockm on Sat 20th Jan 2018 15:46 UTC in reply to "RE[2]: OK any volunteers then?"
jockm Member since:
2012-12-22

People can always try to redo an open 6502 to start with, then improve the design iteratively in the next 200 years to reach something like the 286.


You should really check out OpenCores.org, because (https://opencores.org/project,lattice6502) those 200 years went by pretty fast (https://opencores.org/project,ao486)

But you are right that it isn't a 286. I couldn't find an implementation of that there, so I had to settle for a 486...

Reply Score: 6

RE[4]: OK any volunteers then?
by Kochise on Sat 20th Jan 2018 17:36 UTC in reply to "RE[3]: OK any volunteers then?"
Kochise Member since:
2006-03-03

I was the one providing the 'opencores' link above, so I know what is available or not. It was mostly to point out that if an open source core implementation has to be found, it is already there. The problem is not to have it and improve it; the problem is there is no simple 'make build' command for that kind of thing.

OSS does not always pass conformance testing before release, and I suspect there is more to the QC of a chip than just 'release half baked and patch later'. That's probably why engineers get paid more than a teenager posting from their room in their parents' flat. Real and ambitious fantasies require more commitment than just posting them on the internet.

Reply Score: 1

RE[4]: OK any volunteers then?
by zima on Mon 22nd Jan 2018 09:29 UTC in reply to "RE[3]: OK any volunteers then?"
zima Member since:
2005-07-06

Availability of 6502 is crucial if we want Bender and Terminators! ;)

Reply Score: 5

RE[4]: OK any volunteers then?
by juzzlin on Tue 23rd Jan 2018 14:06 UTC in reply to "RE[3]: OK any volunteers then?"
juzzlin Member since:
2011-05-06

An RTL-level design and a physical design are two different things. People have used open cores to run Linux on an FPGA; there's nothing special in that. The problem is making an ASIC that actually runs faster than 100 MHz.

Reply Score: 1

RE[5]: OK any volunteers then?
by jockm on Tue 23rd Jan 2018 20:25 UTC in reply to "RE[4]: OK any volunteers then?"
jockm Member since:
2012-12-22

The problem is to make an ASIC that actually runs faster than 100 MHz.


I would argue that there are three thresholds when you are talking about fabricating CPUs: below 100 MHz, below 1 GHz, and above 1 GHz.

I agree that below 100 MHz is relatively easy, and the pool of people with sub-GHz experience is growing, so that isn't that bad. The real trick is 1 GHz and higher.

But this is also where the open source analogy comes into play. Open source started fairly modestly, with compilers and other tools, but it grew until we felt confident using it for key infrastructure. I very much think open silicon is going to follow a similar trajectory.

Reply Score: 3

RE[6]: OK any volunteers then?
by Darkmage on Tue 23rd Jan 2018 21:00 UTC in reply to "RE[5]: OK any volunteers then?"
Darkmage Member since:
2006-10-20

It makes sense that in any field, as tools and experience increase, the difficulty and barriers to entry decrease. The larger an open source hardware project gets, the larger the projects it should be able to target.

Reply Score: 2

RE[2]: OK any volunteers then?
by dionicio on Mon 22nd Jan 2018 16:18 UTC in reply to "RE: OK any volunteers then?"
dionicio Member since:
2006-07-12

I have no problem with table and knife at the kitchen. And they're quite old. I TRUST this pair.

Reply Score: 2

lsatenstein Member since:
2006-04-07

The subject line says it all. We would be better off if either AMD or Intel would open source their existing CPUs. But that is not likely.

Consider the time and cost to develop the fabrication masks, and the cost to set up labs to do the testing. Testing prototype CPUs and sample CPUs would likely cost into the hundreds of millions.

And like every design, software or hardware, there are thousands of states in which the CPU can be found. A state is where the CPU is now after executing an instruction, plus what the next instruction to be executed is. Think of the states as a set of points, a polygon, where the transition from one state to another is a connecting line. Some lines just can't exist, and some lines are yet undiscovered.

Back to rolling your own design: you must find funding and you can't be in a hurry. The x86 started with the simple 8088 back in 1980. Almost 40 years later we still find bugs in CPUs.

Reply Score: 1

dionicio Member since:
2006-07-12

A physical CPU is a finite machine, lsatenstein. If bugs remain in the 8088 DESIGN, it is because of insufficient testing.

Think of it as a very long staircased signal processor.

Reply Score: 2

And the funding will come from...?
by fmaxwell on Sat 20th Jan 2018 15:13 UTC
fmaxwell
Member since:
2005-11-13

Sounds like a great idea. Anyone have tens of millions of dollars sitting around to donate to fund this effort, including hiring people with expertise to accomplish this? I'm not aware of any chip fabs that are capable of 10-20nm production that are interested in volunteering their services, equipment, expertise, and materials for this venture.

Reply Score: 2

rom508 Member since:
2007-04-20

Why are people downvoting yours and other comments? Do they not believe in freedom of speech and giving everybody a chance to express their opinion without being labeled "Inaccurate" or "Troll"? Why does OSNews (and other forums) provide tools to do such things? None of the comments in this thread are awful or offensive. When people attach negative labels/scores to other people's views just because they disagree with them, they act in a very undignified way and should be ashamed of themselves.

Reply Score: 0

Athlander Member since:
2008-03-10

Why are people down voting yours and other comments? Do they not believe in the freedom of speech and giving everybody a chance to express their opinion without being labeled as "Inaccurate" or "Troll"? Why does OSNews (and other forums) provide tools to do such things? None of the comments in this thread are awful or offensive. When people attach negative labels/scores to other people's views just because they disagree with them, such people act in a very undignified way and they should be ashamed of themselves.


Maybe some adults find juvenile sarcasm to be trollish enough for a downvote.

Reply Score: 6

rom508 Member since:
2007-04-20

People who look down on other people don’t end up being looked up to.

Reply Score: 0

zima Member since:
2005-07-06

It seems you live in a fantasy...

Reply Score: 4

jockm Member since:
2012-12-22

I am not saying it is an identical argument, but it is similar to the argument from the early days of open source. You don't need volunteers; money can come from a variety of sources, including commercial ones.

The J-Core project (http://j-core.org/) is being angel funded to produce an implementation of the SH-2 processor (why? because all of its patents have expired), and now the SH-4 (since those expire this year). This provides a firm foundation for adding other extensions: bit depth, coprocessors, etc.

Open CPUs will probably first find their foothold in the embedded space. For example Western Digital — which ships over a billion ARM cores in their devices — announced their plans to transition to RISC-V late last year (https://riscv.org/2017/12/designnews-article-western-digital-transit...).

Eventually someone will see the opportunity to make a high end RISC-V, or J Core, or whatever and raise the money to produce the silicon.

Reply Score: 5

Alfman Member since:
2011-01-28

fmaxwell,

Sounds like a great idea. Anyone have tens of millions of dollars sitting around to donate to fund this effort, including hiring people with expertise to accomplish this? I'm not aware of any chip fabs that are capable of 10-20nm production that are interested in volunteering their services, equipment, expertise, and materials for this venture.


That's the problem. I think it would be possible to successfully design an open CPU in an advanced university setting, but many research universities will claim ownership over the results for themselves.

Maybe someone could do it at home if people were willing & able to donate a lot of time to the cause, but x86 would be extremely tedious since it has so much legacy baggage these days. Going for another architecture is possible, but then you'd need to compete on two fronts for both hardware AND software. Incompatibility with popular software has killed many alternative architectures in the past (including intel's own itanium).


In any case, I think it's feasible for an open CPU to reach its *design* objectives without too much investment, but then comes the problem of fabricating it. Nobody has fabs that can hold a candle to intel, who have spent billions to build their mass production facilities. It seems to me that on this level, an open CPU would be at a large technological disadvantage regardless of a successful design.

Edit: It would be nice if intel itself would take interest in an open CPU, but would that be in their business interests? Hypothetically it could happen if some governments started mandating it out of security concerns. We know that the security concerns are well-founded, but it's not clear the government would be on the same page with regards to fixing them (ie NSA, GCHQ).

Edited 2018-01-20 19:18 UTC

Reply Score: 1

Megol Member since:
2011-04-11

fmaxwell,

"Sounds like a great idea. Anyone have tens of millions of dollars sitting around to donate to fund this effort, including hiring people with expertise to accomplish this? I'm not aware of any chip fabs that are capable of 10-20nm production that are interested in volunteering their services, equipment, expertise, and materials for this venture.


That's the problem. I think it would be possible to successfully design an open CPU in an advanced university setting, but many research universities will claim ownership over the results for themselves.
"

*cough* RISC-V *cough*


Maybe someone could do it at home if people were willing & able to donate a lot of time to the cause, but x86 would be extremely tedious since it has so much legacy baggage these days. Going for another architecture is possible, but then you'd need to compete on two fronts for both hardware AND software. Incompatibility with popular software has killed many alternative architectures in the past (including intel's own itanium).


All true. But in the world today we have Linux used everywhere, higher level languages used for everything (including stuff they clearly aren't suitable for) and Microsoft supporting ARM for native Windows NT!

So perhaps it would work out now. Guess we'll see.


In any case, I think it's feasible for an open CPU to reach it's *design* objectives without too much investment, but then comes the problem of fabricating it. Nobody has fabs that can hold a candle to intel, who have spent billions to build their mass production facilities. It seems to me that on this level, an open CPU would be at a large technological disadvantage regardless of a successful design.


The process lead has actually decreased greatly, and TSMC or GlobalFoundries can provide processes with similar specifications. Note that Intel has failed to launch several processes as planned lately, leading to them revising their roadmaps a number of times.

Realities of the modern ASIC also mean that the advantages of fully custom design have decreased, people don't do dynamic logic anymore for instance, so having more resources to optimize a certain design can only give so much extra performance.

So IMHO the supremacy of Intel's processes and designers, which I have extolled frequently in the past (when discussing x86 disadvantages mostly), is much less relevant today. Still there, but...


Edit: It would be nice if intel itself would take interest in an open CPU, but would that be in their business interests? Hypothetically it could happen if some governments started mandating it out of security concerns. We know that the security concerns are well-founded, but it's not clear the government would be on the same page with regards to fixing them (ie NSA, GCHQ).


Don't think that would ever happen unless the world forces them.

And in the end neither Meltdown nor Spectre are x86 specific - Meltdown will get fixed, Spectre solutions will apply as much to x86 as any other architecture. So no need to replace their winning horse any time soon.

Reply Score: 4

Alfman Member since:
2011-01-28

Megol,

All true. But in the world today we have Linux used everywhere, higher level languages used for everything (including stuff they clearly aren't suitable for) and Microsoft supporting ARM for native Windows NT!

So perhaps it would work out now. Guess we'll see.


Well, microsoft supporting ARM is a mixed blessing. Many ARM platforms have evolved to be vendor specific, which is bad. So on the one hand I'd like for MS to succeed, to promote much-needed software standardization in the ARM space. But on the other hand, MS certification for ARM processors requires they be vendor locked to microsoft's boot keys and explicitly prohibit owner control (insert obscene expletives here). This would be just awful, the exact opposite of what we need.


The process lead have actually decreased greatly and TSMC or Global Foundries can provide processes with similar specifications. Note that Intel have failed in launching several processes as planned lately leading to them revising their roadmaps a number of times.


I don't know, it seems to me that for any alternatives to succeed, consumers would have to be willing to incur less support and pay a premium for them until they are able to catch up to intel's economies of scale. That's a very hard sell IMHO. The tech world seems to be consolidating rather than expanding. While I don't believe this is due to a lack of merit in the alternatives, the economic realities pose a hard barrier.


Realities of the modern ASIC also mean that the advantages of fully custom design have decreased, people don't do dynamic logic anymore for instance, so having more resources to optimize a certain design can only give so much extra performance.

So IMHO the supremacy of Intel processes and designers which I have extolled frequently in the past (when discussing x86 disadvantages mostly) is much less relevant today. Still there but...


I'd like to play around with ASICs; too expensive for me though. FPGAs are coming down in price, but they're still expensive. I think it's only a matter of time before PCs contain FPGAs for application-specific acceleration. It seems to me there's more potential for major performance gains in FPGAs than in continuing to evolve conventional CPUs. Maybe an open CPU should be designed to incorporate this?


And in the end neither Meltdown nor Spectre are x86 specific - Meltdown will get fixed, Spectre solutions will apply as much to x86 as any other architecture. So no need to replace their winning horse any time soon.


Meltdown is arguably an intel-specific bug, not something other CPUs would necessarily be prone to (ie AMD doesn't have that bug). Spectre is a more fundamental symptom of speculative execution. Eliminating those leaks is difficult because the mere fact that the speculation engine succeeded or failed to improve branching performance can be measured and therefore leaks information about its state.

In theory, the only way a speculative engine could leak ZERO information is if it had the exact same side effects every time it ran, INCLUDING the time it took to run, but herein lies the dilemma! How can it take the same amount of time to speculate branches successfully versus mispredicting? What good is a speculative engine if it is not allowed to return a result early?
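To make the pattern concrete, here is the general shape of a Spectre-v1 style gadget in C (array names and sizes are hypothetical; this is the vulnerable code shape, not a working exploit):

```c
#include <stddef.h>
#include <stdint.h>

/* If the bounds-check branch below is mispredicted, the CPU may
 * speculatively read array1[x] out of bounds and use the secret byte
 * to index probe[], leaving a data-dependent cache footprint that
 * survives the squash and can later be recovered by timing accesses
 * to probe[]. */
uint8_t array1[16];
size_t array1_size = 16;
uint8_t probe[256 * 512];   /* one cache line per possible byte value */

uint8_t victim(size_t x) {
    if (x < array1_size)                 /* bounds check, predicted  */
        return probe[array1[x] * 512];   /* secret-dependent access  */
    return 0;
}
```

Architecturally the out-of-bounds read never "happens", which is exactly why the leak is so hard to spot in code review.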

Edited 2018-01-20 21:21 UTC

Reply Score: 2

Megol Member since:
2011-04-11

Megol,

"All true. But in the world today we have Linux used everywhere, higher-level languages used for everything (including stuff they clearly aren't suitable for), and Microsoft supporting ARM for native Windows NT!

So perhaps it would work out now. Guess we'll see.


Well, Microsoft supporting ARM is a mixed blessing. Many ARM platforms have evolved to be vendor-specific, which is bad. So on the one hand, I'd like MS to succeed in promoting much-needed software standardization in the ARM space. But on the other hand, MS certification for ARM processors requires they be vendor-locked to Microsoft's boot keys and explicitly prohibit owner control (insert obscene expletives here). This would be just awful, the exact opposite of what we need.
"

Yes, perhaps it is (some would probably argue against that). However, it shows they are again willing to provide support for processors outside the x86 family, this time even providing emulation of x86.

Of course, Windows NT was developed on RISC machines and has (according to several sources) always been written as portable code, so porting to ARM isn't technically interesting but is strategically: Microsoft is willing to support platforms outside the x86 world.


"Intel's process lead has actually decreased greatly, and TSMC or GlobalFoundries can provide processes with similar specifications. Note that Intel has failed to launch several processes as planned lately, leading to them revising their roadmaps a number of times.


I don't know; it seems to me that for any alternative to succeed, consumers would have to be willing to accept less support and pay a premium until it catches up to Intel's economies of scale. That's a very hard sell IMHO. The tech world seems to be consolidating rather than expanding. While I don't believe this is due to a lack of merit in the alternatives, the economic realities pose a hard barrier.
"

Intel is a steamroller, and they could dump their prices, making it hard for someone else to enter the market. A good example is when they gave rebates to e.g. Dell to sell Pentium 4 processors, which in some cases meant Intel gave Dell money to take their processors!

However, I think a new processor design could help reduce the chances of Intel doing this again. E.g. RISC-V is truly open in a way few designs have been in the past, and can be multi-sourced like nothing before in the history of computers.

IMO a bigger problem could be if RISC-V succeeds big and Intel begins designing and selling its own version with some proprietary extension. Intel being a giant, that extension could soon become a requirement for mainstream software, forcing other vendors to adopt it.

And done right, Intel could on the one hand say "see how open we are!" and on the other force other vendors into a less efficient implementation via patents _even_ if the extension is free to use.


"The realities of modern ASIC design also mean that the advantages of fully custom design have decreased; people don't do dynamic logic anymore, for instance, so having more resources to optimize a certain design can only give so much extra performance.

So IMHO the supremacy of Intel's processes and designers, which I have extolled frequently in the past (mostly when discussing x86's disadvantages), is much less relevant today. Still there, but...


I'd like to play around with ASICs, but they're too expensive for me. FPGAs are coming down in price, but are still expensive. I think it's only a matter of time before PCs contain FPGAs for application-specific acceleration. It seems to me there's more potential for major performance gains in FPGAs than in continuing to evolve conventional CPUs. Maybe an open CPU should be designed to incorporate this?
"

Programmable hardware is interesting IMO but hard to get right. In a way, it's caching algorithms into partially programmable hardware, but unlike caching ordinary data, both extraction and programming have huge overheads.

I haven't looked at the state of the art of programmable hardware for several years, so maybe there are good solutions now; otherwise I'd think that (again) RISC-V would be a good choice, as it is designed to be extensible, so hooking up support later should be possible.


"And in the end, neither Meltdown nor Spectre is x86-specific - Meltdown will get fixed, and Spectre solutions will apply as much to x86 as to any other architecture. So there's no need to replace their winning horse any time soon.


Meltdown is arguably an Intel-specific bug, not something other CPUs would necessarily be prone to (i.e. AMD doesn't have that bug). Spectre is a more fundamental symptom of speculative execution. Eliminating those leaks is difficult because the mere fact that the speculation engine succeeded or failed to improve branching performance can be measured and therefore leak information about its state.
"

It's worse than that: information leaks are physically impossible to plug completely. And physics also requires that we have some structures that are relatively easy to leak information through: caches. The speed of light is a PITA.


In theory, the only way a speculative engine could leak ZERO information is if it had the exact same side effects every time it ran, INCLUDING the time it took to run, but herein lies the dilemma! How can it take the same amount of time to speculate branches successfully versus mispredicting? What good is a speculative engine if it is not allowed to return a result early?


While true, you don't state the general case: the only way no information leakage can occur is if an algorithm processing some data always takes the same time, always requires the same amount of power, always loads each part of the computer in the same way, and always leaks the same electromagnetic signals.

The problem with speculative exploits isn't so much that they leak information but how fast and how exactly that information is leaked. If it'd take 100 years to leak a byte of data with an 80% chance of being correct, most wouldn't have a problem.

Spectre can be solved by making sure speculative information isn't leaked. That one can still do timing attacks via caches then matters less, as the information bandwidth is severely decreased and isn't a problem in most cases.

One way to do this would be to ensure that speculative data reads can't influence non-speculative data: a cache fill due to a speculative read would not cause non-speculative data to be evicted. Dedicated resources for speculative data, preferably at each cache level, would mean there is no visible flow of data due to speculation.

One could still point out that even with such a design in place, one could in theory detect that data has flowed into the L3 cache, because a known fill by another processor sharing the L3 cache happens to be slower than it theoretically should be.
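The idea can be sketched as a toy model (all names hypothetical, and hugely simplified compared to real hardware): speculative fills go into a dedicated side buffer and only move into the real cache when the load retires, so a squashed speculation leaves no visible footprint - though, as noted above, the commit itself still becomes visible.

```c
#include <stdbool.h>
#include <stddef.h>

#define LINES 8

typedef struct {
    long cache[LINES];   /* architecturally visible cache lines */
    long spec[LINES];    /* dedicated speculative side buffer   */
    size_t ncache, nspec;
} ToyCache;

/* A speculative load fills only the side buffer. */
void spec_load(ToyCache *c, long line) {
    if (c->nspec < LINES)
        c->spec[c->nspec++] = line;
}

/* Speculation confirmed: move side-buffer lines into the real cache. */
void commit(ToyCache *c) {
    for (size_t i = 0; i < c->nspec && c->ncache < LINES; i++)
        c->cache[c->ncache++] = c->spec[i];
    c->nspec = 0;
}

/* Misprediction: discard the side buffer, leaving no trace. */
void squash(ToyCache *c) {
    c->nspec = 0;
}

/* The attacker's probe: only committed lines are observable. */
bool is_cached(const ToyCache *c, long line) {
    for (size_t i = 0; i < c->ncache; i++)
        if (c->cache[i] == line)
            return true;
    return false;
}
```

Published proposals along these lines exist in the architecture literature; the toy only illustrates the visibility rule, not the timing and coherence headaches of doing it for real.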

(Sorry for the wall of text!)

Reply Score: 3

Alfman Member since:
2011-01-28

Megol,

While true, you don't state the general case: the only way no information leakage can occur is if an algorithm processing some data always takes the same time, always requires the same amount of power, always loads each part of the computer in the same way, and always leaks the same electromagnetic signals.


Yes, these kinds of physical side effects have long been known to leak information about time-dependent algorithms like RSA. CRTs were notorious for RF leaks. Probably even keyboards emit RF. CPUs less so, because of large capacitors and such high frequencies. But the thing is, these physical side effects generally require physical access, whereas the Spectre attack is achieved with pure software.
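As an aside, the standard software defense against those timing leaks is constant-time code; a minimal C sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* A naive memcmp returns at the first mismatch, so its running time
 * reveals how many leading bytes an attacker guessed correctly. This
 * version always touches every byte, making its timing independent of
 * the data being compared. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];   /* nonzero iff any byte differs */
    return diff == 0;
}
```

Crypto libraries apply this discipline everywhere; the Spectre problem is that the CPU undoes it behind the programmer's back.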


The problem with speculative exploits isn't so much that they leak information but how fast and how exactly that information is leaked. If it'd take 100 years to leak a byte of data with an 80% chance of being correct, most wouldn't have a problem.

Spectre can be solved by making sure speculative information isn't leaked. That one can still do timing attacks via caches then matters less, as the information bandwidth is severely decreased and isn't a problem in most cases.


The current attack is measured in seconds, but even several orders of magnitude slower is still a problem, especially if the attacker knows exactly what to look for (i.e. crypto material). If speculation improved execution performance by 30%, that's something an attacker can measure, and information WILL get leaked. One way to decrease the bandwidth of this side-channel leak is to make speculation less efficient (i.e. a 1% difference is more difficult to measure than 30%), but that's the opposite of its goal, hence the problem and the reason it can't easily be fixed. In theory, you could try to make time itself a secret. For a closed system, one could lie to the software about the time it took to run and therefore hide that information from the attacker, but the attacker could still use a remote time source, and beyond this there are likely local peripherals (like the video card) that can effectively leak accurate timing information.
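One deployed mitigation along these lines (browsers coarsened their high-resolution timers after Spectre was published) is simply rounding timestamps down to a fixed granularity; a sketch:

```c
#include <stdint.h>

/* Round a timestamp down to the given granularity. With, say, 100 us
 * granularity, a single cache hit vs. miss (a few hundred ns) can no
 * longer be distinguished in one measurement. Note this only lowers
 * the side channel's bandwidth rather than closing it: averaging many
 * measurements can still recover the signal, as argued above. */
uint64_t coarsen_ns(uint64_t t_ns, uint64_t granularity_ns) {
    return t_ns - (t_ns % granularity_ns);
}
```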

In short, I think it's going to be extremely difficult to overcome the statistical correlation between speculative data prediction and timing. Fortunately only certain code patterns are vulnerable via Spectre, but IMHO it will prove extremely difficult for CPUs to systematically solve those cases short of disabling speculation and even caching in those contexts.


One way to do this would be to ensure that speculative data reads can't influence non-speculative data: a cache fill due to a speculative read would not cause non-speculative data to be evicted. Dedicated resources for speculative data, preferably at each cache level, would mean there is no visible flow of data due to speculation.


Yes, but even once a CPU places boundaries on cache, timing leaks are still a fundamental problem.

(Sorry for the wall of text!)


Not at all, this topic deserves to be discussed at length. But I'm afraid there may not be a satisfying resolution this time ;)

Edited 2018-01-22 22:05 UTC

Reply Score: 3

Megol Member since:
2011-04-11

Megol,

"While true, you don't state the general case: the only way no information leakage can occur is if an algorithm processing some data always takes the same time, always requires the same amount of power, always loads each part of the computer in the same way, and always leaks the same electromagnetic signals.


Yes, these kinds of physical side effects have long been known to leak information about time-dependent algorithms like RSA. CRTs were notorious for RF leaks. Probably even keyboards emit RF. CPUs less so, because of large capacitors and such high frequencies. But the thing is, these physical side effects generally require physical access, whereas the Spectre attack is achieved with pure software.
"

Yes. But it's still a local attack in a sense, in that one has to run on the same processor as the attacked process.

The JavaScript version is actually the biggest security problem, as a remote injection into some website can break out of a known software-based "jail". Semi-remote or semi-local? Something. ;)


"The problem with speculative exploits isn't so much that they leak information but how fast and how exactly that information is leaked. If it'd take 100 years to leak a byte of data with an 80% chance of being correct, most wouldn't have a problem.

Spectre can be solved by making sure speculative information isn't leaked. That one can still do timing attacks via caches then matters less, as the information bandwidth is severely decreased and isn't a problem in most cases.


The current attack is measured in seconds, but even several orders of magnitude slower is still a problem, especially if the attacker knows exactly what to look for (i.e. crypto material). If speculation improved execution performance by 30%, that's something an attacker can measure, and information WILL get leaked. One way to decrease the bandwidth of this side-channel leak is to make speculation less efficient (i.e. a 1% difference is more difficult to measure than 30%), but that's the opposite of its goal, hence the problem and the reason it can't easily be fixed. In theory, you could try to make time itself a secret. For a closed system, one could lie to the software about the time it took to run and therefore hide that information from the attacker, but the attacker could still use a remote time source, and beyond this there are likely local peripherals (like the video card) that can effectively leak accurate timing information.
"

I agree.


In short, I think it's going to be extremely difficult to overcome the statistical correlation between speculative data prediction and timing. Fortunately only certain code patterns are vulnerable via Spectre, but IMHO it will prove extremely difficult for CPUs to systematically solve those cases short of disabling speculation and even caching in those contexts.


Disabling both of them would be very slow, as we are talking about ~400 clocks for a main memory access.

Disabling lower cache levels or giving special treatment to some code sequences could be possible; however, normal languages aren't suitable for that. Yet, anyway.


"One way to do this would be to ensure that speculative data reads can't influence non-speculative data: a cache fill due to a speculative read would not cause non-speculative data to be evicted. Dedicated resources for speculative data, preferably at each cache level, would mean there is no visible flow of data due to speculation.


Yes, but even once a CPU places boundaries on cache, timing leaks are still a fundamental problem.
"

Again, it's impossible to remove all leaks. Not exposing speculative data to any other thread of execution (NB not necessarily an OS thread) would limit leakage significantly, but yes, it's still there.

Something like this makes it less likely for two dependent speculative instructions to make cache effects visible to others, but it's still possible. After an instruction with a cache effect becomes non-speculative, the state has to be updated as if the instruction did a normal read that loaded a cache block, so the result of the speculative chunk as a whole is still visible to others!

Can't see a way around that in anything close to standard processor design.


"(Sorry for the wall of text!)


Not at all, this topic deserves to be discussed at length. But I'm afraid there may not be a satisfying resolution this time ;)
"

If ever. :<

Reply Score: 2

Alfman Member since:
2011-01-28

Megol,

The JavaScript version is actually the biggest security problem, as a remote injection into some website can break out of a known software-based "jail". Semi-remote or semi-local? Something.


You know what, Google would be in an excellent position to execute an attack. Their ads and beacons are so prevalent across the web that the odds are extremely high that a target's machine is running Google's JavaScript code.

It would be highly illegal of course, but just think of the possibilities if Google exploited their access. They could spy on prosecutors or obtain secrets from business competitors and even politicians. The attack leaves no traces on the target, and the network connections are routine outbound connections initiated by the user's own machine. With traffic over HTTPS, it'd be hard even for qualified IT staff to notice anything amiss.

Reply Score: 2

Alfman Member since:
2011-01-28

I just read this: regarding the recent exploits, Intel is recommending partners stop applying the CPU updates until further testing, due to stability issues.

https://newsroom.intel.com/news/root-cause-of-reboot-issue-identifie...

...we are updating our guidance for customers and partners:

We recommend that OEMs, cloud service providers, system manufacturers, software vendors and end users stop deployment of current versions, as they may introduce higher than expected reboots and other unpredictable system behavior. For the full list of platforms, see the Intel.com Security Center site.

Reply Score: 3

Sidux Member since:
2015-03-10

There may be some choices:
1. Russia (after the incident with the USA over software privacy issues, they revived the Elbrus line). Last I checked, the recent 8S release was showcased last year in a functioning state.
2. China (they're already working on their own CPU architecture after the incident in which Microsoft and Intel products were no longer deemed "safe" by the Chinese government).

Not saying that any of these will be open source, but there are alternatives (from a hardware perspective).
The main problem remains software availability and compatibility.

Reply Score: 3

Bill Shooter of Bul Member since:
2006-07-14

Tens of millions is off by a couple orders of magnitude.

Reply Score: 3

oiaohm Member since:
2009-05-30

Tens of millions of dollars is not the case.

RISC-V has been made on everything from 120 nm down to 7 nm processes. TSMC's 16 nm process was used a few times with RISC-V in 2017.
https://cseweb.ucsd.edu/~mbtaylor/papers/Celerity_CARRV_2017_paper.p...
The above example is a small-batch 16 nm production run in 2017.

So at 16 nm, small-volume cost is not horribly bad; we are talking well under the one-million-dollar figure.

The reason for jumping over 10 nm straight to 7 nm is that TSMC is after items to produce to test out its new production lines.

The reality is that RISC-V production was at 16 nm in 2017, and it will be at 7 nm in 2018, at least for some projects.

The reality is that, with the automation, turning out BOOMv2 on 28 nm took only two people, one technical and one in management. Even the 16 nm chip took only a team of four.

One of the shocking effects is that, due to RISC-V being open hardware and using a newer tool called Chisel, the personnel cost of doing a production chip is way lower.

https://riscv.org/wp-content/uploads/2015/01/riscv-chisel-tutorial-b...

Basically, a job that would once require hundreds of staff only requires a handful with the automation Chisel provides. It's quite surprising how much of the cost of silicon design was humans doing tasks that could be automated.

Reply Score: 4

Open source doesn't address root cause ..
by Iapx432 on Sat 20th Jan 2018 17:23 UTC
Iapx432
Member since:
2017-09-30

How does open source address the root cause of Spectre and Meltdown? The same performance strategies (speculative / out-of-order execution) could just as likely have been employed by engineers working on open source. Linux used a monolithic kernel, which is probably inherently less secure than a microkernel, for feasibility/performance reasons. The same mission-goal trade-offs will happen no matter what the governing IP license is. Agreed, the ME's secret Minix running at a "-1" security level would not have happened in secret. So I am all for open source, but let's be realistic about what it fixes and does not fix.

Reply Score: 4

Open Source X86
by Darkmage on Sat 20th Jan 2018 18:00 UTC
Darkmage
Member since:
2006-10-20

As far as I'm aware, open-source x86 will be viable in two years' time when the AMD64 patents expire. I'm pretty sure all the features patented since then are CPU extensions, and anyone can develop a vector-math co-processor unit to handle a lot of that lifting. Things like alternative memory interconnect types shouldn't be that important. Can someone who actually knows about CPU design jump in? In terms of getting that magic 99% of software to work, the base AMD64 extensions should be enough. Heck, 32-bit x86 is already patent-free.

Reply Score: 3

RE: Open Source X86
by Megol on Sat 20th Jan 2018 19:44 UTC in reply to "Open Source X86"
Megol Member since:
2011-04-11

As far as I'm aware, open-source x86 will be viable in two years' time when the AMD64 patents expire. I'm pretty sure all the features patented since then are CPU extensions, and anyone can develop a vector-math co-processor unit to handle a lot of that lifting. Things like alternative memory interconnect types shouldn't be that important. Can someone who actually knows about CPU design jump in? In terms of getting that magic 99% of software to work, the base AMD64 extensions should be enough. Heck, 32-bit x86 is already patent-free.


It could be problematic to create non-compatible extensions for several reasons, including encoding space. The VEX encoding and similar are still patented and will be for a while.

x86 is also a complicated target. Intel, AMD, and VIA have learned how to avoid corner cases and handle quirks without paying too much in performance/efficiency. Any newcomer will need several years before they begin to understand how to implement things properly.

The best bet would be doing a translation-based design and decoding x86 instructions into an internal instruction set. That would still be harder than doing something else. Transmeta didn't succeed in the market, partially because things were more complicated than they thought at first.

RISC-V has succeeded in getting support from more groups than any other open processor design; in fact, that is one of the problems with it IMHO: the extensibility and willingness to extend the ISA can lead to a family of only partially compatible ISAs.

Still think that RISC-V is the only real alternative in the near future. It's a boring design though.

Reply Score: 3

Comment by Drumhellar
by Drumhellar on Sat 20th Jan 2018 19:53 UTC
Drumhellar
Member since:
2005-07-12

No, it isn't time, and for a couple of reasons:

1) The many-eyes fallacy - we know from recent experience that many eyes don't prevent major bugs - Heartbleed, Shellshock, etc. were all major bugs that many eyes didn't discover. The fact that virtually all architectures with this type of speculative execution are vulnerable means a significant share of the people who are actually capable of understanding the tech were involved, and all independently designed a variety of different processors that are susceptible to the same vulnerability.

2) Open source software means an intrepid developer can release his own patch, and everybody can patch their software against vulnerabilities whether or not the patch is accepted upstream. This isn't something that applies to processors. You can't patch a processor in the same way. Even engineering a patch would take an exorbitant amount of work from a large team, as the whole processor design likely has to be considered.


Now, processor design can benefit from open source - Intel's AMT stuff and similar technologies should definitely have their specs open, and the software running on them should at least be user-replaceable, if not open source on its own. But actual silicon design just won't see the same benefits from open source that software does.

Reply Score: 3

RE: Comment by Drumhellar
by kwan_e on Sun 21st Jan 2018 00:53 UTC in reply to "Comment by Drumhellar"
kwan_e Member since:
2007-02-18

1) The many-eyes fallacy - we know from recent experience that many eyes don't prevent major bugs - Heartbleed, Shellshock, etc. were all major bugs that many eyes didn't discover.


How would they have been discovered if the source weren't open for some of those many eyes to analyse? And how quickly were they patched up after being discovered?

You can't make the argument "there was a problem, so open source didn't work." You have to compare to the baseline of how such a scenario would have played out if there weren't many eyes to find the problem.

2) Open source software means an intrepid developer can release his own patch, and everybody can patch their software against vulnerabilities whether or not the patch is accepted upstream. This isn't something that applies to processors. You can't patch a processor in the same way.


That's not a problem with open source. That's just a chicken and egg problem. You can't patch a processor in the same way because those processors weren't designed in an open way which would allow people interested in the problem to work on it.

Reply Score: 5

v RE[2]:
by gotocaca on Sun 21st Jan 2018 03:58 UTC in reply to "RE: Comment by Drumhellar"
Comment by Dasher42
by Dasher42 on Sat 20th Jan 2018 20:12 UTC
Dasher42
Member since:
2007-04-05

It would behoove us to have an open-source architecture more widely in use, if nothing else to mix things up in the embedded space. RISC-V seems well underway there, and the real benefit would be to have a useful, power-efficient SoC for all sorts of purposes. Can you imagine that running Contiki OS?

Obviously that would make a nice alternative to the firmware currently running our x86 systems. If one needed a reasonably fast x86 processor, I'd think an instruction translation unit for x86-64 melded to an OpenSPARC core would be interesting.

I'd like to see a solid RISC-V core with an ample FPGA setup! Hello hardware acceleration!

Edited 2018-01-20 20:14 UTC

Reply Score: 3

RE: Comment by Dasher42
by rom508 on Sat 20th Jan 2018 22:04 UTC in reply to "Comment by Dasher42"
rom508 Member since:
2007-04-20

OK so aside from simply being open, what would be the technical advantages of RISC-V compared to other established ISAs? If we ignore licensing cost, are you saying RISC-V would be faster and more power-efficient than ARM CPUs? I'm not a hardware engineer, but I very much doubt it. I suspect there is interest in RISC-V but mainly from companies that want to save money on ARM licensing. They will contribute just enough to get a CPU design for their specific needs, but they won't drive this technology forward, since it wouldn't make them any money. If you ship this CPU only in your disk drives, why spend money on R&D for use cases in data centers? And because RISC-V uses BSD license, how many of these companies will release their designs/modifications?

Reply Score: 2

RE[2]: Comment by Dasher42
by Dasher42 on Sun 21st Jan 2018 02:55 UTC in reply to "RE: Comment by Dasher42"
Dasher42 Member since:
2007-04-05

The advantage to RISC-V is that it's a very cleanly designed architecture, with a capable base and several optional extensions, that lends itself to highly power-efficient, small, simple designs. It's not going to compete with x86 or POWER anytime soon, but I'm thinking there are plenty of niches for it to dominate.

Reply Score: 5

RE[3]: Comment by Dasher42
by Alfman on Sun 21st Jan 2018 04:11 UTC in reply to "RE[2]: Comment by Dasher42"
Alfman Member since:
2011-01-28

Dasher42,

The advantage to RISC-V is that it's a very cleanly designed architecture, with a capable base and several optional extensions, that lends itself to highly power-efficient, small, simple designs. It's not going to compete with x86 or POWER anytime soon, but I'm thinking there are plenty of niches for it to dominate.


For my needs (fast servers), I'd love to have cleaner architectures like RISC-V, but I need the high performance and commodity pricing that generally only come with economies of scale. I believe most of the market won't budge until new architectures can beat the support, price, and performance of incumbent technology. The problem is that it's difficult to deliver any of these up front: widespread support is non-existent before popularity, prices are high before economies of scale, and performance will suffer before having access to the best fabs.

Maybe you are right that RISC-V can come and fill a niche that is being ignored; you've got to start somewhere, after all. What would some of those niches be? It may not be fair, but even if RISC-V is the better architecture, there are large economic challenges to overcome before the world may be ready to fund it significantly.

Edit: bear in mind that I actually want alternatives to succeed. But if we cannot overcome these challenges somehow, then RISC-V could end up another market failure. How do we solve this?

Edited 2018-01-21 04:22 UTC

Reply Score: 3

RE[4]: Comment by Dasher42
by Dasher42 on Sun 21st Jan 2018 14:41 UTC in reply to "RE[3]: Comment by Dasher42"
Dasher42 Member since:
2007-04-05

Fair enough! You're in the domain of the most performance per watt, at scale, with server farms. Here's where I see RISC-V getting its start:

Embedded microcontrollers and sensors. This is the kind of market where tiny versions of 68k processors are still prolific. I'm thinking smart grid, automation, that sort of thing.

Firmware. We need an open-source version. Further, some emerging architectures build processing units into other parts of the system, like inline with the RAM, and de-centralize them. Why not RISC-V?

From there, one can imagine smartphones coming into play. ARM is alright, but I think the Meltdown/Spectre moment is coming for these highly integrated phone/broadband chipsets. RISC-V could have a role to play here.

By this point in evolution, more performant and parallelized RISC-V implementations could crack the server farms, starting small and working up.

Just my forecast.

Reply Score: 4

RE[3]: Comment by Dasher42
by dionicio on Mon 22nd Jan 2018 17:15 UTC in reply to "RE[2]: Comment by Dasher42"
dionicio Member since:
2006-07-12

IoT is waiting for this...

Reply Score: 2

Of course riscv is the future
by rener on Sat 20th Jan 2018 23:01 UTC
rener
Member since:
2006-02-27

and in the meantime I have fun playing with my R10000 MIPS64 SGI Octane: https://www.youtube.com/watch?v=AU_RV8uoTIo ;-)

Edited 2018-01-20 23:01 UTC

Reply Score: 2

RE: Of course riscv is the future
by rener on Sun 21st Jan 2018 08:26 UTC in reply to "Of course riscv is the future"
rener Member since:
2006-02-27

and here is the MIPS64 R10k SGI Octane video I was working on: https://www.youtube.com/watch?v=AU_RV8uoTIo

Reply Score: 1

RE: Of course riscv is the future
by dionicio on Mon 22nd Jan 2018 17:30 UTC in reply to "Of course riscv is the future"
dionicio Member since:
2006-07-12

Look at the little spinning top! Look at the little spinning top! Guille! Guille! ;)

Reply Score: 2

sparc cpu
by marc.collin on Sun 21st Jan 2018 02:53 UTC
marc.collin
Member since:
2012-08-03

Like the article said, there are already some that exist....

The big problem is the money needed to make a CPU.

Edited 2018-01-21 02:56 UTC

Reply Score: 2

Open CPU to all at some point.
by oiaohm on Sun 21st Jan 2018 05:47 UTC
oiaohm
Member since:
2009-05-30

It's like Western Digital switching to RISC-V for a lot of things. This saves Western Digital a lot in licensing payments.

RISC-V has already been taped out at 7 nm.

We know the absolute smallest feature is 0.1 nm, and that is a single carbon atom. A single silicon atom is 0.2 nm.

Even with carbon, it is a question whether electric-based chips can get past 1 nm.

The thing to wake up to is that when we hit the limit, there will no longer need to be massive ongoing investment in new fabs. Instead it will come down to optimizing and cost cutting.

Reply Score: 3

lsatenstein Member since:
2006-04-07

The problem manufacturers are facing is crosstalk.
When lines are at 7 nm, there is a problem of induction causing crosstalk.

As the cells drop in size, so must the CPU voltage, and silicon or germanium conductivity problems arise.
Perhaps it will be a reality, but I think it will not be for several years, perhaps even a decade.

Reply Score: 2

JLF65 Member since:
2005-07-06

When lines are at 7 nm, there is a problem of induction causing crosstalk.


And at half that size, electron tunneling starts to become significant. Tunneling is already an issue at current sizes for current leakage, leading to higher power usage.

Reply Score: 3

Fools
by Brendan on Sun 21st Jan 2018 21:56 UTC
Brendan
Member since:
2005-11-16

Hi,

For CPUs there are two very different categories. There are high-performance CPUs (e.g. with out-of-order, speculative execution, etc.) which have become so complex that making them open source wouldn't make any difference at all (because almost nobody would be able to fully understand them or find the "security vulnerability needle in the haystack" even if they bothered to look). For these CPUs the only thing open source would do is increase the cost of hardware (e.g. manufacturers like Intel going nuts with extensions and patents to protect their ability to fund the R&D investment needed to continue improving performance).

Then there are low-performance CPUs (typically embedded in things like microwave ovens, hard disk controllers, etc.). For these CPUs it'd make no difference (for security) if they're open source or not, because they're so simple that there wasn't a security problem in the first place. The only thing "open source" does in this case is save the manufacturer a small amount of $$ (e.g. not having to pay someone like ARM a small licensing fee). This is exactly what we're seeing with RISC-V: Nvidia embedding it into proprietary GPUs (using it for job control, with proprietary firmware) to save themselves a little $$; and Western Digital embedding it into some hard disks (with proprietary firmware) to save themselves a little $$.

Mostly the article is marketing hype - open source advocates using fear (created by spectre/meltdown) to peddle snake oil to fools.

- Brendan

Reply Score: 3

RE: Fools
by Alfman on Sun 21st Jan 2018 23:35 UTC in reply to "Fools"
Alfman Member since:
2011-01-28

Brendan,

Mostly the article is marketing hype - open source advocates using fear (created by spectre/meltdown) to peddle snake oil to fools.


I agree with you that the chip manufacturers may have trouble fitting open hardware into their business models. However, your last point went downhill. Open hardware advocates are most certainly not "peddling snake oil to fools"; that's an insult. You should be fair and admit that the call for open hardware existed long before Spectre and Meltdown; these recent incidents only brought new media attention to it.

For me personally, I've long wanted control over my CPU's proprietary management processor. It was revealed in the past year that Intel CPUs with AMT/vPro had some pretty serious vulnerabilities that had remained open for about a decade. Openness is not just about security, though; there's a lot of potential for owners to make better use of their processors than Intel offers. For instance, I'd prefer the AMT to be accessible only through a VPN. I would have implemented this feature myself if Intel didn't block me from doing so on my machines. It would add considerable security over Intel's stock software, but of course Intel's CPUs are cryptographically locked to its closed, proprietary software.

Going beyond Intel, another issue with proprietary hardware is being dependent on vendors for any updates (i.e. drivers/firmware). I've encountered this problem repeatedly, and it infuriates me knowing that manufacturers will neither fix issues themselves nor allow others to fix them. Oh how I wish open hardware advocates were in a much stronger position to demand openness from all our hardware vendors.

Reply Score: 3

RE: Fools
by dionicio on Mon 22nd Jan 2018 17:43 UTC in reply to "Fools"
dionicio Member since:
2006-07-12

"The only thing "open source" does in this case is save the manufacturer a small amount of $$ "

Every coder likes to build on firm foundations; think of energy plants, electric grids, hospitals, banks, etc.

Whoever needs massive performance is not going to choose an open RISC, but an open MISC parallel computing arch.

Reply Score: 2

Open Source is not "the" remedy
by DeepThought on Mon 22nd Jan 2018 06:52 UTC
DeepThought
Member since:
2010-07-17

Whenever security flaws are detected, there is a common cry for open source.
But there was nothing "hidden" in the latest security flaws. The behavior of the CPUs (x86, ARM, Power) is well described and could be exploited.
The ARM manual even states that speculative cache filling is not considered a security problem!

Maybe the ARM architecture is not "open source", but there are at least three companies with an architecture license for ARMv8-A (AFAIK Apple, Qualcomm, and Nvidia).
All these companies have massive CPU know-how, and none of them added protection against these exploits.

So how would "Open Source" help?

Reply Score: 2

kwan_e Member since:
2007-02-18

Whenever security flaws are detected, there is a common cry for open source.
But there was nothing "hidden" in the latest security flaws.


What does that even mean? By that logic, no security flaws are ever hidden, because they are discovered, and thus they are not hidden once we find out about them.

The behavior of the CPUs (x86, ARM, Power) is well described and could be exploited.


That has nothing to do with something being open source. Your understanding of the term is screwy.

All these companies have massive CPU know-how and did not add a protection against these exploits.

So how would "Open Source" help?


What a stupid question.

"It wasn't open, and that did not help. How would opening it up help?"

Reply Score: 3

DeepThought Member since:
2010-07-17

"Whenever security flaws are detected there is a common cry for Open Source.
But there is nothing "hidden" in the latest security flaws detected.


What does that even mean? By that logic, no security flaws are ever hidden because they are discovered, and thus they are not hidden when we find out about it.
"

There are security problems that stay hidden because special features of the software or hardware are kept secret, or the source code is closed.

For Spectre/Meltdown that is not the case. The exploits use well-documented features of modern out-of-order CPUs.


"All these companies have massive CPU know-how and did not add a protection against these exploits.

So how would "Open Source" help?


What a stupid question.

"It wasn't open, and that did not help. How would opening it up help?"
"

How "open" is open? Even though the Linux kernel is "open" there are only a few people in the world to understand parts and even less everything of it.
The ARM architecture is close to the wider public, but at least a few engineers have full understanding of it.
Even though it was open to them, they did not see the problems.
Or, as I cited from ARM documentation, did not consider it a security problem.

Reply Score: 3

oiaohm Member since:
2009-05-30

How "open" is open? Even though the Linux kernel is "open" there are only a few people in the world to understand parts and even less everything of it.
The ARM architecture is close to the wider public, but at least a few engineers have full understanding of it.
Even though it was open to them, they did not see the problems.
Or, as I cited from ARM documentation, did not consider it a security problem.


This is the problem. With RISC-V, there are more people who fully understand its cores than there are with a full understanding of ARM cores.

Interestingly enough, the RISC-V BOOM cores were designed without these defects, as were Qualcomm's out-of-order arm64 chips. So not everyone making ARM chips agreed with ARM that it was not a problem. Qualcomm redid their out-of-order design starting from an A55, much like BOOM was derived from the RISC-V Rocket.

The Linux kernel is an insanely complex beast due to the number of platforms it supports.

We are starting to see RISC-V designs with parts that address issues that could not be worked around when using ARM or x86 in multi-core.

Maybe there is an advantage in opening up your ISA and CPU design to universities, whose silicon design courses will dissect it over and over again, producing a mammoth number of people who really know how it works.

RISC-V being open hardware cannot be compared to the Linux kernel. You don't have universities dissecting the Linux kernel over and over again as coursework the way they do RISC-V.

Reply Score: 3

DeepThought Member since:
2010-07-17


Maybe there is an advantage in opening up your ISA and CPU design to universities, whose silicon design courses will dissect it over and over again, producing a mammoth number of people who really know how it works.


I agree (for the moment). If RISC-V gets as complex as a Cortex-A75, then that might stop.

Reply Score: 2

dionicio Member since:
2006-07-12

Thanks oiaohm. I knew the problem was known, or at least suspected, even at Intel.

Stakeholder pampering, maybe :/ Probably the cause of the inaction.

Reply Score: 2

dionicio Member since:
2006-07-12

Massive kernels are not the future. Hardware will have to evolve around this.

Reply Score: 2

Alfman Member since:
2011-01-28

DeepThought,

For Spectre/Meltdown that is not the case. The exploits use well-documented features of modern out-of-order CPUs.



If I recall, they said cache timing was out of scope in the context of an ASLR address leak. But that's really a different beast from Meltdown and Spectre. I'm not aware of any documentation that would even imply the Meltdown behavior; I'd honestly be surprised if it were explicitly documented. If it is, though, can you cite exactly what you are referring to so that we can read it?

Reply Score: 3

Open CPUs could be closer than one thinks.
by oiaohm on Mon 22nd Jan 2018 10:33 UTC
oiaohm
Member since:
2009-05-30

https://www.youtube.com/watch?v=f-b4QOzMyfU

The video above is of a planned 7nm RISC-V chip with 16 primary cores and 4096 other cores.

Watch it at the 19:15 mark and note that their plan includes doing the GPU in the 4096 cores. So the RISC-V ISA for everything. All those cores are to be BOOM v2 based, which is out-of-order.

So this is going to be a very interesting chip in 2018. It could also explain why Intel is paying AMD for a GPU to embed with their CPUs.

https://www.youtube.com/watch?v=toc2GxL4RyA
This is the BOOM v2 at 28nm: two people, two months, and chips produced. Also note that the Rocket chip, which is in-order, shares its core design with BOOM v2.

https://www.youtube.com/watch?v=ZOGpNTn1tpw
This labeling system is also very interesting: it fixes the reason you cannot do dependable real-time on multi-core ARM, x86, etc., and it has already been tested with RISC-V.

It also gets interesting when you realize that the person designing the RISC-V vector extensions also designed AVX for Intel.


Of course, none of this has the current Intel, AMD, and ARM CPU design issues.

Reply Score: 3

zima Member since:
2005-07-06

Interesting regarding "doing GPU in the 4096 cores" ...so it seems that RISC-V will successfully do what Intel tried, and failed, to do with Larrabee.

Edited 2018-01-23 17:36 UTC

Reply Score: 3

Mill architecture
by dsmogor on Mon 22nd Jan 2018 15:53 UTC
dsmogor
Member since:
2005-09-01

The whole discussion led me to presentations on the Mill architecture, which made my weekend.
They support software speculation and found that their compiler (or rather its last-stage program loader/specializer) was indeed affected by variant 2 of Spectre. As a result, they fixed the problem in the compiler.
How cool is that?

Edited 2018-01-22 15:53 UTC

Reply Score: 2

RE: Mill architecture
by jockm on Mon 22nd Jan 2018 16:44 UTC in reply to "Mill architecture"
jockm Member since:
2012-12-22

The Mill is really interesting on paper, and the people behind it have impressive pedigrees; but to my knowledge there are no implementations (hard or soft) that independent people can test. As far as I can see, the promised 2017 demo didn't happen, though I would love to be proved wrong on that.

I keep an eye on them, but until there is some independent analysis I wouldn't get too excited.

Reply Score: 3

RE[2]: Mill architecture
by dsmogor on Thu 25th Jan 2018 17:21 UTC in reply to "RE: Mill architecture"
dsmogor Member since:
2005-09-01

Still, reading their papers and watching the videos is very intellectually refreshing.

Reply Score: 2

Yes!
by icicle on Mon 22nd Jan 2018 18:22 UTC
icicle
Member since:
2013-12-07

Open processors are a wonderful idea!

Reply Score: 1

but NO
by user78 on Mon 22nd Jan 2018 22:52 UTC
user78
Member since:
2011-07-06

open source hardware is way worse than proprietary....the linux community has no idea they have hackers working with them

Reply Score: 0

RE: but NO
by kwan_e on Mon 22nd Jan 2018 23:29 UTC in reply to "but NO"
kwan_e Member since:
2007-02-18

open source hardware is way worse than proprietary....the linux community has no idea they have hackers working with them


You do know that open source projects such as Linux do have source control, right? So even if they may have hackers working with them (and your use of the word in this context implies you're a bit of a numpty), we can see the source they contribute and figure out what it's trying to do.

Reply Score: 4

RE[2]: but NO
by Megol on Wed 24th Jan 2018 16:05 UTC in reply to "RE: but NO"
Megol Member since:
2011-04-11

"open source hardware is way worst than proprietary....linux community have no idea they have hackers working with them


You do know that open source projects such as Linux do have source control, right? So even if they may have hackers working with them (and your use of the word in this context implies you're a bit of a numpty), we can see the source they contribute and figure out what it's trying to do.
"

There are several examples illustrating that the "many eyes make all bugs shallow" idea is not true.

OpenSSH is a good example that most technical people remember. But it isn't the only one.

Properly executed, a weakness can be introduced by an undercover operative "accidentally" doing something that can be exploited later.
E.g. if someone in a TLA has spotted the possibility of a Spectre-type attack, they just write a piece of code so that GCC compiles it into a known weak spot -> success!

And nobody would blame the operative, so they can keep contributing together with the other aliases of that person or group.
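To illustrate the point (a hypothetical sketch, not code from any real project): a Spectre variant 1 gadget is exactly the kind of "accident" that survives code review, because architecturally the code is perfectly correct:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical example of an innocent-looking variant 1 gadget. The
 * bounds check makes this architecturally safe, but a mistrained
 * branch predictor can run the body transiently with idx out of
 * bounds, leaving a cache footprint indexed by the secret byte. */
static uint8_t probe[256 * 4096];  /* one cache line per byte value */

static uint8_t lookup(const uint8_t *data, size_t len, size_t idx) {
    if (idx < len) {                       /* the check a reviewer sees */
        return probe[data[idx] * 4096];    /* the transiently leaky load */
    }
    return 0;
}
```

A real exploit would additionally need to train the predictor and then recover the touched line via cache timing, but none of that machinery has to appear in the committed code, which is what makes the scenario plausible.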

Reply Score: 2

RE: but NO
by jockm on Tue 23rd Jan 2018 01:41 UTC in reply to "but NO"
jockm Member since:
2012-12-22

Care to provide some proof?

Reply Score: 3

Comment by Phloptical
by Phloptical on Tue 23rd Jan 2018 13:01 UTC
Phloptical
Member since:
2006-10-10

Sure, you can open source a design on paper, but that's only half the battle. The other half, which Intel develops, is the means and methods of manufacturing that chip: thousands and thousands of hours of testing and validation, and millions of dollars of equipment tooling and process testing. Who is going to do that for a design they don't own as proprietary?

Semiconductor manufacturing is a massive undertaking. It's not easy. It's amazing that they have the control and quality over their products that they do. Intel and AMD deserve to be well paid for their efforts, regardless of this Spectre issue.

Reply Score: 1

RE: Comment by Phloptical
by kwan_e on Tue 23rd Jan 2018 13:50 UTC in reply to "Comment by Phloptical"
kwan_e Member since:
2007-02-18

It’s amazing that they have the control and quality over their products that they do.


Not really.

https://en.wikipedia.org/wiki/Product_binning

Reply Score: 3