In the wake of the recent Meltdown and Spectre vulnerabilities, it’s worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn’t been the case for decades.
Processor vendors are not alone in this. Those of us working on C/C++ compilers have also participated.
It’s just not the lowest possible.
But C is a low-level language because it can adequately represent the widest range of hardware without too much abstraction. People forget that C is also portable to microcontrollers, which don’t have any of that fancy cache or speculative execution.
I do not agree. The term “low-level language” is usually reserved for languages with little or no abstraction with regard to the underlying hardware. This mostly means we use it for machine code or assembly language (https://en.wikipedia.org/wiki/Low-level_programming_language).
C, on the other hand, has quite a lot of abstraction if you think about it. Just because C allows you to handle pointers, and because you have to do your own memory management in C, doesn’t mean it is low-level by itself. Things like for loops, switch statements, etc. are indicative of high-level languages. You can code in C without knowing much of anything about the hardware it is running on. Even when coding memory management and creating your own data structures, you don’t need to know whether the platform is little- or big-endian, for example. So I wouldn’t say that C is a low-level language.
Compared to a scripting language (for example Tcl), C does seem pretty low-level. But that is because high-level languages span a wide range of abstraction.
Remember that it is not the floor that determines whether something is low-level; it is the ceiling. Does a language have advanced constructs providing a high level of abstraction? Then it is a high-level language (perhaps one that also gives you the option of accessing low-level functionality). If not, it is a candidate for being a low-level language. C has a big span between its floor and ceiling. This is one of the reasons it is popular among many programmers.
The opposite is not true, though: there are no low-level languages providing high-level functionality (that would make them high-level languages!).
Claiming that C is a low-level language is like claiming every screw is a nail just because you can use a hammer to drive the screw into a piece of wood (just as you could use C to write code that is very close to the metal). But then you are only looking at the “floor” of the language. Nothing wrong with that, but those are not the constructs that would earn it the “high-level” label.
That’s a bad argument, since nails and screws occupy the same level of “abstraction” as each other. A screw isn’t a high-level joiner and a nail isn’t a low-level joiner. A screw provides more than a nail does, but it is still in direct contact with the things it joins, just like C is with the machine.
Ok, I will stop using this analogy, since you chose to interpret it as being about “how close it is to the hardware” rather than what I intended: what you CAN do and MAY do versus what you MUST do and MAY NOT do.
In C, you CAN program close to the hardware; in assembly languages you MUST program close to the hardware. In C you MAY program very abstractly without knowing much about the underlying architecture; in assembly languages you MAY NOT ignore the underlying architecture.
There is an interesting class, though, and that is bytecode, produced by many language compilers and run on a virtual machine. You could write that bytecode directly without knowing the underlying hardware; you would still need to know the virtual machine it runs on, though. So yes, I acknowledge that throwing virtualization into the mix makes the discussion harder. Still, I don’t think many people would consider bytecode a high-level language.
It doesn’t matter “how many people”. By your definition, they would be high-level languages. You can’t excuse yourself from your own argument when it works against you.
LOL. This very article that we are discussing and some are disputing. Kind of like a circular proof, right?
Ever stopped and thought “maybe I am the problem?”
All the time. But then I’m reminded of every other time I say basically the same thing, in the same initial tone, to other people, and they don’t automatically find a gross misrepresentation to take offence at like you and he do.
You, of course, never think it’s your problem.
Why don’t you find a few people, here even, show them my initial statements that supposedly caused offence to him or you, and ask them whether the offence taken was reasonable in any way.
So let’s get this straight.
* You make a rather contentious statement. (Bonus points if you are insulting whole groups of developers)
* Someone says “Hey that really isn’t right dude”.
* You claim you are right.
* You call people a snowflake.
It’s happened several times now, not just with me. I’ve seen you do it to other people as well.
Also, it is rich calling people snowflakes considering you literally lost it once I said I worked in gambling.
This is called cognitive dissonance and you are so far up your backside you can’t see it.
I was insulting you. FFS you’re dense.
Like I said, get a second opinion. Ask someone about those two paragraphs that you claim were insulting about whole groups of developers.
Your reaction to those two paragraphs is the biggest snowflakey reaction of all.
TASM has for loops and switch statements. There are lots of modern assemblers with high-level features as well, even some with OOP constructs. Does that make them high-level languages? If you ignore those particular features, the assembler is still just straight 1:1 mnemonics… Does adding a few features like this change the nature of the language completely?
I’m just saying, I think portable languages are a breed apart; it’s kind of pointless to bring up assembly in a discussion about the nature of the C language, as assembly is just a completely different thing.
Now that is a tough question, I must admit. Add enough new features that allow programming at a high level of abstraction, and that language could perhaps be considered a high-level programming language. The hard part is answering: how much is enough?
A few features that do not change the way you write programs in a significant way are probably not enough. Add enough of those features and you would probably have an entirely new language.
I don’t have a good answer to that. But I promise to sleep on it and see if I get wiser.
Don’t you primarily think in your 1st language?… (also, that’s basically the test of fluency, whether we can think in a given language)
Just like Forth, BASIC, and Pascal; nothing special about C here other than having more vendors supporting it.
I don’t recall any BASIC being able to handle pointers, and my time with Pascal was marred by 256-byte maximum arrays…
Sigh… If you want to play that game, you could argue that most dialects of BASIC are even more low-level than C, since they have PEEK and POKE (https://en.wikipedia.org/wiki/PEEK_and_POKE) to directly read and change things in memory. Now we are not even talking about instructions anymore; we are talking about knowing which memory cells control specific functions. Without going into microcode, it is hard to get much closer to the hardware.
Again, it is not the low-level functions that determine whether a language is a high-level language; it is whether it offers high-level abstractions.
Whadda ya mean? I don’t see how PEEK and POKE are any more low-level than, say, changing the IOBYTE by just going *3=0;
If your compiler’s a bit fussy, maybe you have to do *(char *)3=0;
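(For the curious, here is a rough sketch of PEEK/POKE equivalents in C — the helper names are made up for illustration, and on a hosted OS poking address 3 like this would simply fault:)

    #include <stdint.h>

    /* BASIC-style PEEK and POKE as C helpers. The volatile qualifier
       tells the compiler the access itself matters (e.g. a memory-mapped
       hardware register), so it can't be optimized away or reordered. */
    static inline uint8_t peek(uintptr_t addr) {
        return *(volatile uint8_t *)addr;
    }

    static inline void poke(uintptr_t addr, uint8_t value) {
        *(volatile uint8_t *)addr = value;
    }

    /* e.g. clearing the CP/M IOBYTE at address 3, as above: poke(3, 0); */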
Well, one thing is a language allowing you to play around with pointers, possibly with no bounds checking. The other is a language actively offering two functions: one to peek into a given memory cell, especially cells mapped to hardware registers, and one to poke new values into those cells.
But of course you are right, in practice the effects are essentially the same.
And the reply was really to counter the statement “that BASIC doesn’t have pointers”, as if that somehow meant you couldn’t mess around with the hardware. BASIC just handles it differently.
But given the earlier arguments about language design (on the topic of low-level vs high-level languages), wouldn’t a language offering these two functions be a sign that it is a low-level language? For me the answer is no, since I found the reasoning flawed from the beginning. I should have been more clear, sorry.
BBC BASIC had ! (read/write words) and ? (read/write bytes). One of the things that made BBC BASIC so powerful (at the time) was that you could make OS/SYS calls directly.
Sinclair BASIC had PEEK and POKE, if I recall correctly.
Getting the machines to do anything useful was basically an exercise in pretending to write BASIC but actually writing machine code.
Nearly all the BASICs for the 8-bit computers had PEEK and POKE statements; you could even embed machine code in Atari BASIC strings and execute it from Atari BASIC itself.
VARPTR
You may want a refresher. Pascal allows much longer arrays; it was the String type that was limited to 255 bytes (plus one byte to store the length). But you could write your own string type with a much larger array.
Regular BASIC could handle pointer-like manipulations via PEEK and POKE.
Other BASIC dialects had explicit pointer types as a convenience.
Did you stop using Pascal on CP/M?
Because even Turbo Pascal for MS-DOS allowed arrays larger than 256 bytes, as the index can be any enumeration or integer type.
Also, C arrays are limited by pointer size, which can be any size, including being marred by 256-byte limits on tiny 8-bit CPUs.
The point is that size_t for the architecture is guaranteed by the standard to be able to represent the largest memory object for that architecture, even if it is an ill-advised thing to have really large arrays on the stack.
I heard someone once describe C as a “high level assembler”.
Personally I think, if you call _only_ assembler a low-level language, why even have the terms low-level and high-level? Why not just say assembler vs. actual programming languages?
I remember reading about language generations in the context of “modern” so-called dynamic languages like JavaScript and PHP around 15 years ago. I don’t precisely remember which generations they were, but I recall something like: C/C++ are 4th-generation languages, while C#, JavaScript, and PHP are all 5th-generation languages. None of them are really low-level, but each is basically more abstract (and convenient) than its predecessor.
Oh hey, there’s a wikipedia article on it:
https://en.wikipedia.org/wiki/Programming_language_generations
So by this information, what I had heard is wrong – C, C++, C#, Java, and JavaScript are all third-generation languages. Still, none of them are low-level.
You can’t seriously argue that C is low-level here when a few days ago you were arguing that modern C is nothing like the C people used 30 years ago…
Would you at least agree that modern usage of C is not low level?
Modern usage of C on microcontrollers with no cache, no speculative execution, no branch prediction, no operating system?
As long as C can be compiled to target the bare hardware, it is low level.
Superscalar execution, speculative execution, branch prediction, etc. are all microarchitectural techniques that are not (or should not be, in any sane implementation) exposed to the programming model. As far as C goes, a microcontroller and an aggressive out-of-order core look the same.
Sounds like some serious gatekeeping here; the colloquial expression “low-level” is used to distinguish C and other compiled software from interpreted software (Python, JavaScript, …).
Actually, there’s a fair bit of disagreement over the definition of “low-level language”.
In fact, via /r/rust/, I was recently introduced to an ACM Queue article named “C Is Not a Low-level Language: Your computer is not a fast PDP-11.”
https://queue.acm.org/detail.cfm?id=3212479
The gist of the article is that, with the advent of heavily microcoded superscalar processors, even x86 assembly is too abstracted away from how the CPU actually functions to properly be called a low-level language.
Either way, compiled vs. interpreted is secondary. Compiled LISP or Haskell will still be high-level languages because, even by the generous definition used by Wikipedia, they don’t satisfy “provides little or no abstraction from a computer’s instruction set architecture”.
https://en.wikipedia.org/wiki/Low-level_programming_language
Thanks for showing me that article; a thought-provoking one. The advancements in computer technology, computer science, and language design have blurred the lines over the years, that is for sure.
When I think of high-level vs low-level languages, I tend to think about whether my thoughts can be expressed in the language without knowing what platform it will run on.
If I code in C, Java, Smalltalk, Scheme, Prolog, Perl, Tcl, or any of the other languages I have been using over the last few decades, I can focus on the algorithm and the data without knowing whether it runs on a specific processor or machine configuration (within certain limits, of course).
If I had to program in assembly or, worse, microcode (which I have done once as a fun lab experiment), I would also need to know more about the underlying hardware to express myself correctly. Just knowing the language is not enough.
Also, the things I code in high-level languages tend to be quite portable and can often be compiled or interpreted on machines with different architectures with minimal or no changes. That is not possible with code written in a low-level language.
As for the interpreted vs compiled question, you are spot on. That is another classification that runs parallel to the question of high- vs low-level languages.
In the end, I don’t think it really matters that much. I have a lot of different tools in my toolbox, each to be used when most appropriate.
Depends on what the instruction set architecture actually means.
On mainframes, with micro-coded CPUs, compiled LISP would translate directly to the instruction set architecture.
Congratulations– You provided a link to the article this topic was posted about.
Well done!
Aside from that, what the article is really saying is that there is no such thing as a low-level language anymore, because it crafted a definition which excludes all languages, including assembly.
*facepalm*
Sorry. It was late when I responded to this and I got mixed up about which OSNews tab had the link to the ACM Queue article.
(In fact, given how dozy I was, I might have even not yet checked where the link went and mis-remembered it as pointing at another article.)
Sleep-deprivation is one hell of a drug.
Been there, done that. Had just gotten out of one of “those” meetings, so I may have used excessive snark.
Most (in fact, almost all) of the arguments in the article against C can be made for assembly as well. For example, they both have the same flat memory model, where the cache is not explicitly managed. They both assume serial execution even when the internals of the CPU are much more complex.
Modern fast C code requires using intrinsic functions that map directly to CPU primitives. For example, if you are writing a very tight loop for a hotspot in serving code, you need to manually unroll your loops, order your data so that array portions fit in cache lines, give explicit instructions to manage cache lines, etc. You would need to do the same thing in assembly. So in this sense I still see C as a low-level language.
However, for 99% of your code you don’t need to write the program as if you were targeting PS3 SPUs. Modern profilers will tell you exactly which portions of your code need attention, and then you put on the “low-level hat” and manually optimize caching, use AVX instructions, and manage other details to make sure you get the maximum performance out of your CPU architecture. For the rest, the compiler does a very good job.
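To illustrate, here is a minimal sketch of what that low-level hat looks like (assuming an x86 CPU with AVX, the <immintrin.h> intrinsics, n a multiple of 8, and a made-up function name; compile with something like -mavx):

    #include <immintrin.h>
    #include <stddef.h>

    /* Sum two float arrays 8 lanes at a time with AVX intrinsics.
       Each intrinsic maps more or less 1:1 to a CPU instruction. */
    void add_arrays(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);  /* load 8 floats */
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
        }
    }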
Not any more: atomic instructions and fences are available, which basically admits the possibility of out-of-order execution. C++ and even C have now updated their abstract machines to admit the existence of concurrent execution with threads of some kind.
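A minimal sketch of what that looks like in C11 with <stdatomic.h> (producer/consumer are made-up names; the release/acquire pair is what constrains the out-of-order machinery):

    #include <stdatomic.h>
    #include <stdbool.h>

    int payload;               /* plain data */
    atomic_bool ready = false; /* synchronization flag */

    void producer(void)
    {
        payload = 42;
        /* Release store: everything written above becomes visible to
           any thread that acquire-loads `ready` and observes true. */
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    int consumer(void)
    {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ; /* spin until the flag is set */
        return payload; /* guaranteed to read 42 */
    }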
Yes. The article is basically arguing that the native instruction sets of modern CPUs are not low-level programming languages…
It is not completely false, but it is not something you should say outside a speculative article like this, since it would just confuse matters.
And more to the point: attempts to make new types of ISA have generally failed, not just at penetrating the market, but even at being faster. It turns out those abstractions let the CPU do the last level of optimization for itself at runtime, which it can do much better than any compiler.
Yeah, but I think the overall point gets lost in attacking C. The overall point is that we should be designing languages that map well to the hardware, rather than augmenting the hardware to accommodate languages that are 40 years removed from the days when they had a strong mapping to actual hardware.
This is holding back hardware design. This little story ran on OSNews a week or so back: http://pharr.org/matt/blog/2018/04/18/ispc-origins.html. It is the story of how the author wrote a new language with a better mapping to vector hardware. Rather than hoping that the C compiler could automagically detect parallelism (which turned out not to be reliable), he designed a new C-like language that directly exposed a subset of the underlying architectural parallelism.
But he wasn’t working in a vacuum; he followed the pattern of GPU languages, where shaders run on multiple vertices at once across multiple GPU units.
I think the fundamental problem, however, isn’t a language problem. Writing languages that explicitly support hardware-level parallelism via threads and vector primitives is relatively easy. Mapping our problems to these languages is hard. Not every problem is embarrassingly parallelizable.
A large set of problems consists of doing math in a linear series of steps where the next step depends on the previous one, and you’ve only got one long stream of data to process. If the compiler or CPU can pick out a few adds in a complex expression that can run in parallel, great, but I am not sure how you design a programming language to explicitly support this sort of ability in serial data streams.
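As a concrete (if hypothetical) example, take a plain reduction — each iteration needs the previous result, so there is nothing for a parallel language feature to grab onto:

    #include <stddef.h>

    /* A loop-carried dependency: acc at step i depends on acc at
       step i-1. Without reassociation (which changes floating-point
       results), neither the compiler nor the CPU can simply run the
       iterations side by side. */
    double running_sum(const double *x, size_t n)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++)
            acc += x[i];
        return acc;
    }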
But I think we could be doing a lot better than C – the thread as a language primitive sucks, and we could perhaps build in hints or language features that expose underlying machine-level parallelism, if present.
You do realize, of course, that your code is going to require a machine with a 32-core processor and 96 gigs of memory to approach the speed of an Apple II from the 1980s…
The term ‘low-level language’ is usually used incorrectly anyway. It’s not a binary thing; there are varying levels of abstraction in different languages, so you’re not purely ‘high-level’ or ‘low-level’, but somewhere on a spectrum from maximal abstraction to minimal.
If you want to be really picky, even most modern assembly languages aren’t truly low-level, as they quite often have numerous ‘instructions’ that do not translate to unique machine code (for example, in MSP430 assembly, the NOP opcode is actually a register move that does nothing), and thus exist solely as abstractions to simplify things for the programmer.
Compared to assembly and C-- (or even to older stuff like BCPL), C is a high-level language; but C is low-level compared to the vast majority of other widely used languages (it should be pretty obvious that it’s lower-level than Python or Ruby, a bit less obviously so compared to Lua or C#, and even less so compared to Objective-C or C++). In fact, it’s about as low-level as you can get without depending on which ISA you’re running on.
ahferroin7,
Exactly!
Q: Is C a high level language?
A: It depends on arbitrary criteria which don’t have any material significance.
I agree with the article that processors have evolved into extremely complex state machines with speculative execution engines in order to maximize single-threaded performance. And I agree that this has come at the expense of other evolutionary paths. However, the whole premise of the article – that this has anything whatsoever to do with C or high-level/low-level languages – is wrong, and I think it’s unfortunate that the article took this line of reasoning. It creates divisive arguments for no good reason.
In light of these vulnerabilities that exploit speculative execution, we should take a look at CPU primitives and ask ourselves whether we wouldn’t be better off with other, more efficient ways of producing parallelism, such as VLIW. The distraction of C being high-level or low-level doesn’t even need to come up!
Both software developers and CPU manufacturers are stuck in a chicken-and-egg problem. Software developers keep writing software for traditional CPU models, and CPU manufacturers have trouble selling CPUs that existing software can’t benefit from. The security issues with speculative execution could be a catalyst for change. Furthermore, there is a chance that things would be different today if the industry attempted to build vector processors again. Vector computing (aka OpenCL, GPGPU, etc.) is really taking off. Also, compilers are becoming much better at auto-vectorization, which could bring more software developers on board. So conceivably a crossover could succeed at this point even though it has failed in the past.
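As a small, hypothetical example of the kind of loop modern auto-vectorizers handle well (the restrict qualifiers promise the compiler the arrays don’t overlap, which is usually what unlocks the SIMD code generation):

    #include <stddef.h>

    /* GCC and Clang will typically turn this into SIMD instructions
       at -O2/-O3 with no intrinsics in sight. */
    void scale_add(float *restrict dst, const float *restrict src,
                   float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] += k * src[i];
    }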
On the other hand, both the desktop and mobile markets are quite mature now, and the term “good enough” is thrown around a lot. Does it make sense for major industry players to open their checkbooks to make it happen? While a lot of money has been spent trying, no one, not even Intel, has been able to dethrone the venerable x86.
I think that while high-level versus low-level doesn’t matter as much as the article seems to imply, it matters at least a little for historical reasons.
In a traditionally ‘low-level’ language, there’s generally a stronger desire to be efficient than in many ‘high-level’ languages (partly because you really can be more efficient in most ‘low-level’ languages). A good example of this difference in mentality is that C++ has a pretty significant (and not entirely deserved) reputation for being horribly inefficient compared to C, which is largely because people prefer the ‘easier’ and obviously higher-level facilities like std::string (which is horribly inefficient because it does pretty much everything with dynamic allocation, so you usually have at least one heap allocation every time you call a function on it).
As a result of this comparatively laissez-faire attitude toward efficiency, hardware has had to take up the slack to keep things running at a reasonable performance level, which has helped push this desire to maximize performance in hardware instead of software (at least in the consumer and enterprise markets).
ahferroin7,
Yeah, but the problem is it doesn’t belong in the context of this article. The author clearly thinks that single-threaded superscalar CPU parallelism has come at the expense of better kinds of parallelism, and I think he’s right about that; I’ve been saying the same thing. But the way he goes about it in the article is extremely distracting, and consequently almost all of our comments are focusing on this non sequitur argument about C being low-level or not. It simply doesn’t matter whether C is low-level. We should be debating whether it’s time to make a significant shift away from speculative execution to other kinds of parallelism.
This is not language-specific at all. For this to succeed, all sequential languages (even assembly) could benefit from new concurrency constructs. High-level languages were never the impediment here; quite contrary to the perception in the article, high concurrency can be a property of either high-level or low-level languages.
There are frameworks and language variants that are better suited to vector processing and high concurrency, like OpenMP and C-Linda. Unfortunately, though, almost nobody uses them and they remain unpopular.
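For anyone who hasn’t seen OpenMP: a sketch of how little it asks of you in C (build with something like gcc -fopenmp; saxpy here is just the conventional BLAS-style name for this loop):

    #include <stddef.h>

    /* One pragma asks the compiler to split the iterations across
       threads and SIMD lanes; remove it and the code is plain C. */
    void saxpy(float *y, const float *x, float a, size_t n)
    {
        #pragma omp parallel for simd
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }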
C forces you to focus on being an expert in it and its horrible syntactical needs. Also, you’re in an elite club that can cast disdain on higher level languages.
C is low level for all the incidental interpretations of low level.
As far as I remember, C was designed to map efficiently to machine code – for example, the sequence of the for-loop components. On a Lisp machine, Lisp is probably much lower-level than C. Forth will run very efficiently on a machine with two hardware stacks, but not so well on a machine with no stacks. It is distance from the machine that defines low-level, and both the machine and the language are variables. C is not such a low-level language on a GPU versus on a CPU. I am sure this question could be settled statistically if someone had the time.
First, the article essentially says that there are no low-level languages for x86, including assembly.
Unfortunately, this is because the author insists on defining a “low-level language” as one that doesn’t abstract away the hardware.
In 40 years, I have yet to encounter a language more advanced than assembly that meets this criterion.
In fact, let’s call “assembly language” – that is, a language made of mnemonic opcodes specific to a processor – “A”.
Side note: an opcode called “CMP”, which compares two values, is not a low-level operation by Chisnall’s definition – the operations the CPU performs to do that comparison are themselves “hidden” from the programmer.
Let’s imagine for a moment a language slightly more advanced than “A”, which supports pseudo-code – a standardized way of expressing “CMP”, or “CP”, or “JMP” – perhaps “if” or “copy” or “goto”. We might, for the sake of argument, call this “B”.
While it would be a useful language, in that we wouldn’t need to remember opcodes specific to each processor, or know a system’s memory map or register set, it’s not really a full-blown language, so it never came to exist.
Now let’s take that pseudo-code language “B” and add structure, data types, functions, maybe a standard I/O library. We might call that “C”.
And that’s what C is – about as close as you can get to programming any given CPU without knowing exactly how that CPU operates. It should be considered the lowest level of fully abstracted programming language available. As such, it became insanely popular because it was faster than BASIC, Pascal, Fortran, or COBOL – it let you get as close as possible to the guts of the system without having to write code for that specific system.
It’s a high-performance language that requires some care – if you malloc(), don’t forget to free(), and don’t exit your subroutine before reclaiming resources.
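In practice that care often looks like the classic single-exit cleanup idiom (a sketch; the function and its workload are hypothetical):

    #include <stdlib.h>

    int process(size_t n)
    {
        int rc = -1;
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        if (!a || !b)
            goto cleanup;

        /* ... work with a and b ... */
        rc = 0;

    cleanup:
        free(b); /* free(NULL) is safe, so partial failures are fine */
        free(a);
        return rc;
    }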
So – the reality is that C was never what Chisnall claims some people think it is. It doesn’t meet his definition of a low-level language. It is, however, about as low-level as most sane programmers want to get.
As for claiming that Spectre and Meltdown are all the fault of people writing C code, that’s just specious. From a security point of view, that sort of speculative branch prediction was badly designed to start with and doesn’t play nicely with today’s multi-threaded systems. Many manufacturers have spent the last two decades emphasizing speed over security, and Intel, Microsoft, AMD, and others are all guilty.
Personally, having experienced low-level programming in Z80 and 6502 assembly language, and knowing how completely incompatible the two systems were, I’m left somewhat baffled by his article – does he want to go back to the days when you coded for specific hardware?
No, he can’t be that daft. Perhaps, instead, this is really an article about how convoluted x86 has become and how we need better CPU architectures – and that does seem to be a point near the end of the article, although ARM has similar speculative goofs in its designs (just not as prevalent).
All in all, this seems to be an “x86 sucks because of C” article, and that’s such a gross oversimplification that I have difficulty taking him seriously.
Could not agree more. The whole thing is so deranged and the arguments are so stretched that there is no way to take them seriously.
Out-of-order and speculative execution came to exist because access to any memory outside the processor is slow, and so we needed a way to keep the CPU busy. It has nothing to do with C.
Also, he seems to believe that parallelism is something easy and that C is somehow a blocker for it. WTF – most applications have no need for parallelism inside them, or would actually be hurt by it. Where there is a good opportunity for parallelism and it is relatively easy to put to use, it is already in use, e.g., OS multitasking, asynchronous I/O, and some math algorithms.
The only thing he gets about right is that modern processors are not super-fast PDP-11s.
C and C++ are low-level languages. Show me any serious software written in the language of the author’s wet dream – there is none. Who would use such a thing seriously? You would be tied to a single chip model, so the solution would be fragile and expensive like Unix pipes (I think the authors were smoking something) – a small change in your system or input data format and the whole project is dead. Most of the time I can compile and run the same piece of code on MIPS, ARMv7, ARMv8, x86, and x64, or even on something people throw away in the trash, such as PowerPC. By the time the author finishes implementing their software with the help of the nude virgins from utopia land, their original target chip will be in the local dumpsters or museums, the competition will already have been on the market for several years, and any finite set of students will be able to maintain such a product at reasonable cost.
The whole “article” is so filled with fundamental mistakes that it isn’t worth commenting on in detail. Crap.
The “C” language is lower-level than Python, Ruby, Java, etc.
The “C” language is higher-level than assembly language.
That Chisnall article got me thinking about the CISC vs RISC scenario and links such as these
https://riscv.org/2018/01/more-secure-world-risc-v-isa/
https://forums.sifive.com/t/does-risc-v-dodge-the-plague/965
are encouraging on the RISC side of things in regard to the Meltdown/Spectre scenario.
The scope of the Meltdown/Spectre problem, coupled with the fact that it originates from the world’s main CPU designer/supplier, is a bit “scary”; Intel does have massive research/design resources.
The RISC scenario (attitude?) may hint that the CISC (x86) world should slow down a bit and concentrate more on quality in CPU design and security. As time goes on we are getting more network-connected, and hardware-level security will become more important.
Personally, I would prefer a more secure, parallelisable multi-core CPU over a less secure, faster CPU. On the software side, the full realisation of concurrency may need a rethink – for example, using a pro-concurrency Erlang-like base (rather than a “C” base) directly below the application layer.
The reality here is that calling C a low-level language has been an excuse not to extend it to detect and handle its worst parts.
Most people don’t look inside the crt object files in GCC to notice that even int main() is abstracted.
The optimisation engines inside most current-day C compilers are a fairly good match for proof/lint engines, but we don’t regularly use them that way.
Yes we do; that’s called a compiler warning. GCC, Clang, and the Intel compilers provide far superior warnings to any classical lint program, and the Clang static analyzer is far and away better than some traditional static analyzers (such as Cppcheck).
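For example (a deliberately buggy, hypothetical snippet; GCC 10+’s -fanalyzer or clang --analyze will flag the path through it):

    #include <stdlib.h>

    int bad(void)
    {
        int *p = malloc(sizeof *p);
        if (!p)
            return -1;
        *p = 1;
        free(p);
        return *p; /* use after free: the analyzers report this */
    }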
From the preface of the K&R book:
Make of that what you will.
It isn’t particularly relevant whether it is or it isn’t.
Is C a low-level language? Yes.
Is Assembler (and its 50,000 variants) a low-level language? Yes.
However, the programming language is not the issue. The issue is that Intel (and AMD and others) have been optimizing their processors to the point that they no longer use assembler as anything more than an interface to the real processor: a micro-code layer does all kinds of things to the code before it actually runs – out-of-order execution, etc. – to try to make things faster by predicting what the code will do before it does it.
Programmers generally know nothing about the micro-code.
Programming Languages generally know nothing about the micro-code.
Compilers know nothing about the micro-code.
The only interface to the micro-code is the processor’s assembler, which provides extremely limited manipulation of the micro-code itself – typically through CPU flags, with no direct command-level manipulation.
Comparatively, the micro-code is akin to a virtual processor translating instructions to a completely different system that cannot be manipulated by the original instructions – like writing something in French for readers who only understand Mandarin Chinese, with the micro-code doing the translation from French to Mandarin.
Alternatively… the issue is that compilers can’t generate microcode. I guess if they could, then C would get some new “lower-level” instructions, and an OS compiled to microcode would be able to limit the processor’s look-aheads to that process’s authorized memory. I suspect the memory architecture would need to change as well, as microcode tends to be very wide and can’t be cached, etc. But to be honest, I really haven’t a clue at that level of things.