“For years I’ve tried my damnedest to get away from C. Too simple, too many details to manage, too old and crufty, too low level. I’ve had intense and torrid love affairs with Java, C++, and Erlang. I’ve built things I’m proud of with all of them, and yet each has broken my heart. They’ve made promises they couldn’t keep, created cultures that focus on the wrong things, and made devastating tradeoffs that eventually make you suffer painfully. And I keep crawling back to C.”
The STL and Boost are just awesome; even when not using an OO programming style, a C++ programmer has some very nice libraries available. Being able to use vectors is nice. Having many good libraries is nice:
http://en.wikipedia.org/wiki/Category:C%2B%2B_libraries
But otherwise, I mostly agree with the author. C has shortcomings, but they are well documented and tools are available to deal with them.
C++11 is also great.
What are you talking about?! The STL and Boost are bloated and ugly; and worst of all they produce the most useless, verbose, garbage compile-time errors. (Admittedly the ugliness of those errors is partly due to C++’s template implementation, and STL and Boost are far and away the best set of libraries for C++)
The problem there is the lack of adequate tooling for C++. C++ templating should have been largely fixed by having Concepts. Too bad it didn’t make it into the standard.
I see little bloat in STL, although for a while I was misguided as well, and thought that STL was too bloated. It isn’t. STL is quite small and quite elegant; I would not be afraid of using it.
With Boost I can agree that it’s a bloated animal, but nobody says you must use all of Boost. It’s just too big as a whole, and sometimes all you need is a smart pointer.
But now, C++11 really comes in and saves the day. It does make the STL more efficient, and makes you need fewer features from Boost. It adds the smart pointers you need to the standard, as well as threading and timing. It’s quite nice; you should try it.
The errors are indeed awful; but maybe this is something that can be improved.
Hi,
As far as I can tell, for all programming languages the majority of problems/bugs are caused by people having trouble dealing with complexity.
The C++ way of dealing with problems caused by complexity is to add more complexity.
– Brendan
It’s not, as evidenced by the C++11 process. A lot of the effort was made to simplify the language. The “complexity”, if you can call it that, mostly came from increasing library features.
Are you sure you know what you’re talking about?
C++03:
——
vector<Object*> objs = get_objects();
for (vector<Object*>::const_iterator i = objs.begin(); i != objs.end(); ++i)
{
cout << (*i)->toString() << endl;
}
for (vector<Object*>::iterator i = objs.begin(); i != objs.end(); ++i)
delete *i;
C++11:
——
vector<shared_ptr<Object>> objs = get_objects();
for (auto& s : objs)
{
cout << s->toString() << endl;
}
The C++11 version is easier to read, easier to program, does not require manual memory management, and is exception-safe.
By introducing range-based for loops, static type deduction, move semantics, shared pointers and a lot of other interesting stuff, they simplified the language, made the libraries more efficient, and kept backwards compatibility.
Did you just make the code smaller than it needs to be just to make a point, as if lines matter?
Not at all.
1. I replaced the classical “for loop” with the range-based for loop, which hides complexity and improves readability a lot.
2. I used shared_ptr<Object> instead of Object*, so I do not need to iterate through all the vector elements to release them; the shared_ptr does the job.
3. Since shared_ptr<Object> is a value type (storing a pointer), if an exception occurs, the vector will destroy its shared_ptr elements automatically, which in turn release the Objects, making my code exception-safe.
All of that, and my code is smaller.
Am I missing something? Why didn’t you use the range-based for loop in the original code as well, to compare apples to apples?
What you are missing is that the range-based for loop appeared in the C++11 standard.
You’re missing that this is comparing the old and new versions of the C++ standard. He doesn’t use the simplified syntax in the “original code” because that’s not possible – demonstrating the difference in syntax was the entire purpose of the post.
Vectors are nice (as are Lists) but those alone aren’t good enough reasons to use C++.
One thing that’s always confused me is the odd separation between the STL and language features. For example, why are Iterators an STL class rather than being a language construct, along with some syntactic sugar like a foreach statement?
One of the biggest reasons C++ is the complicated language it is comes down to the design principle that most of the heavy lifting should be done in libraries.
The STL and TR1/Boost libraries serve as a demonstration of C++ language features.
There’s also the other principle of “you don’t pay for what you don’t use”. Having library features as part of the language could result in unwanted features being pulled into low-level code. A separate library is a very strong signal that the inclusion of heavyweight stuff in the code is intentional.
That explains my question regarding Iterators, but I still find the split rather odd and slightly arbitrary.
Of course my complaint is technically invalid since C++11 added syntactic sugar for foreach loops, so at least the syntax isn’t as ugly now.
It is because C++ is a library programming language. If you don’t need a specialized library, you probably shouldn’t be using C++. This is also the reason STL doesn’t matter; if you need STL you are using C++ wrong.
So the main reason for using C++ are libraries, except when you use them, in which case you’re using C++ wrong… 😉
Needing containers (and now threads, with C++11) is wrong?
I really liked Smalltalk, but it was just too darn slow and unusable in the real world. Then, 10 years ago, I tried coding in Objective-C on Linux (with GNUstep) and I have never looked back. Since it’s a pure extension on top of C, I can still write all of my low-level algorithms in C and the higher level constructs in Objective-C. And GNUstep (and OpenStep, on which it’s based) is really a freakin’ great library. Even for non-graphical work it’s super simple:
1) There are two classes implementing collections (NSArray, NSDictionary), not a bajillion.
2) The garbage collector is extremely predictable (ref counts).
3) The class hierarchy is very shallow (at most 1-2 superclasses for 95% of the entire library), so it’s easy to memorize.
4) I can intermingle low-level concepts (e.g. sockets) with GNUstep constructs (NSFileHandle) and it works just fine.
5) The loose typing and built-in dynamic reflection make many tasks super simple (e.g. -[NSArray makeObjectsPerformSelector:]).
All in all, the ideal mix of high-level abstraction and low-level grunt for my taste.
Indeed, C with an elegant Smalltalk-like OO extension — you could do much worse.
And are you sure you don’t need other data structures? Like, you know, trees & such? Bitmaps?
You can write C++ code without ever using delete, using things like smart pointers (ref counted).
The point of a framework is to offer you features, not to be small. You don’t need to memorize it; you should always be able to access the documentation. Qt is the sort of framework that is elegant and can help you write clean code.
When did lack of features become such a great thing?
The author is spot on.
It is a well written article.
C++ does not force you to go OO, but it encourages it.
In large systems, OO will hurt you.
The older I get, the more I like C, and detest C++.
Read Scott Meyers on how you can get yourself in trouble with C++. There are hundreds of different pitfalls waiting to trap you.
In my experience it is the opposite; C++ deters people from proper object orientation. IMO, C++ is such a syntactical kludge that plenty of programmers I have worked with revert to procedural thinking inadvertently when coding in it.
I don’t think the object orientation is what makes large C++ projects hard to maintain.
Despite my C++ apologetics, I have to agree with the part about reverting to procedural thinking. However, this kind of thing is implied in Bjarne Stroustrup’s book.
Some things are just not suited to an OO style. I find OO best suited for defining ADTs. Frameworks should be a mixture of procedures and ADTs, not large object systems.
Nope.
It’s simply their size.
And this is a good thing
Uhm. I dare to disagree. Depends on what you call a ‘large system’. It may feel like OO causes bloat because it’s easy to get bloated when you’re using OOP. But that doesn’t mean that it will happen.
At the core of OOP we have the association of data with the methods that work on that data, through encapsulation. I see nothing wrong with that. It does help you to be more productive, and it makes more sense for larger systems to go OOP. OOP systems are simply easier to handle and to extend.
I have found Noel Llopis’ Data-oriented design paper to be an eye opener. http://gamesfromwithin.com/data-oriented-design
With OO, it is easy to end up with a menagerie of objects all referring to each other with deep complexities. With data oriented design you tend to end up with flatter architectures and fewer dependencies.
I’m a game programmer, and I like his advice on how you could code most of your game engine subsystems as you would code your particle system.
With C, I feel more easily guided toward data-oriented structures-of-arrays, instead of the OO-style arrays of structures/objects.
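To make the contrast concrete, here is a minimal sketch of the two layouts in C (the particle example and all names are purely illustrative):

#include <stddef.h>

/* Array-of-structures (AoS): the usual OO-style layout. */
struct particle {
    float x, y, z;
    float vx, vy, vz;
};

/* Structure-of-arrays (SoA): the data-oriented layout. A pass that
   only updates positions touches only the position arrays, which is
   far friendlier to caches and SIMD. */
struct particles {
    size_t count;
    float *x, *y, *z;
    float *vx, *vy, *vz;
};

static void integrate(struct particles *p, float dt)
{
    for (size_t i = 0; i < p->count; i++) {
        p->x[i] += p->vx[i] * dt;
        p->y[i] += p->vy[i] * dt;
        p->z[i] += p->vz[i] * dt;
    }
}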
bram,
That’s an interesting link. From a technical standpoint I don’t think anything is inherently bad with C++ features. However there’s no doubt that it encourages rather different approaches to software design.
C programs often use no abstractions whatsoever and will directly call the external libraries and kernel.
C++ facilitates rich levels of abstraction, which is generally a selling point for developers to choose it over C, and yet these very abstractions can be responsible for adding many more inefficient layers than we typically find in C programs.
OOP interfaces significantly help with contract-oriented programming in teams and help make problems much more manageable. I think a good OOP programmer will know where to draw the line without going crazy about everything needing to be a proper object.
There’s no reason a high performance game should not be written in C++, just be mindful of too much indirection in critical loops.
I think that the author’s enthusiasm in the productivity arena is somewhat misguided. I love C but I don’t find it as productive as C++ (especially when compared with C++11). It may be a personal feeling, but the OOP support that C++ offers makes you more productive and makes it easier to build larger-scale applications.
And when you get into C# development, comparing C with C# in the area of productivity is just mean towards C. I’m glad the author didn’t insist much on the productivity part, because that would’ve been wrong. C is less productive, but it has all the other things that the author highlights.
C++ is too big for low level stuff and too complicated for higher level stuff. It’s almost impossible to master the language… really really hard (at least for stupid people like me… and we’re the 99% xD).
I think the smart way to go is plain old C for system level stuff and Java or C# for user level stuff.
Android is the perfect example of this kind of mixture. Profit.
Why are you using all the bells and whistles of C++ for the low level stuff?
Qt seems to be doing all right.
Mastering the entire language is only really necessary if you’re creating libraries for wide consumption. The difficult C++ features are there to achieve generalization while still retaining static pre-compile type checking.
Mastering the entire language is also necessary when working in teams or dealing with code somebody else wrote (so this affects perhaps 90% of programmers). Some people think it’s necessary to use every feature the language provides.
But 99% of the time you work with code created by other people!
C++ is so big that you are always learning the language, wasting precious time that can be used solving the actual problem.
Don’t you consider precious the time you are learning something new?
I don’t think that’s true. I’ve dived through the code of C++ libraries like Qt, Boost, STL, scene graphs and game engines, and they haven’t really used language features that aren’t in common use.
Because Java never results in over-engineering and complexity….
It depends whether it has the word “Enterprise” attached to it.
Ah yes, truer words have seldom been spoken. The magic word for immense Java suckiness: “Enterprise”.
Heck, it carries the same magic in any programming language.
Yes.
I have seen examples of J2EE in C, Perl, C++, Java, C#.
You just need to give a few enterprise architects free rein.
Normally it comes about by mandating that a particular technique be used throughout the system regardless of its relevancy.
Even the stupidest people, like myself, can read Java code and get a general idea of what it does. It’s a very simple and beautiful language to read.
Over-engineering is not a language problem. You can over-engineer in assembler if you want.
The problem with C++ is the language complexity itself. It’s difficult with or without over-engineering.
I understand written English but I do not understand German at all.
<sarcasm>
Let’s blame the “German language designers” for creating a language that not all people can understand.
</sarcasm>
It’s easy to mock, but his point was not unreasonable – Java is a much simpler language than C++, without templates, pointers, etc. A non-Java programmer looking to read Java code is going to have a much easier job understanding what he sees, than a non-C++ programmer looking to read C++ code.
Java has generics. It even looks like C++ template syntax. Except with Java generics, you lose type information so you can’t write:
if( obj instanceof HashMap<String, String> )
You have to write:
if( obj instanceof HashMap<?, ?> )
In my experience, Java’s over reliance on inheritance makes the behaviour of Java code very hard to figure out just by looking at it. In general I find dependence on a debugger to contribute more to unreadable design than anything else.
I am more than happy to have not touched C again since 2001.
Even clang and gcc are now done in C++.
Maybe you did not put a .c extension on your code; but C++ is an extension of C, so you need to know C concepts (pointers, arrays, pointers to functions, structs, malloc, etc.).
When I code in C++ I take advantage of what C++ offers.
Even the C like code takes advantage of C++ improvements over plain old C.
Uhmm, no. It’s just a language. Bloat and overcomplication are caused by the coder doing all the wrong things.
All in all, while I’m not old myself, I’m still fairly “weird” in not trusting coders who don’t know C and/or C++. If they know them, but choose to use something else for a specific task/project, and they have an explainable reason why, that’s OK with me. Not knowing them, or not using them for reasons like the quote, is a no-go.
I don’t know C++ or C. I always find it infuriating that this attitude still exists that you must know C/C++ to be “good”.
If someone values designing code using the object-oriented paradigm (OOP) then it would be expected this same person would use a language that has explicit/stronger support for OOP features (e.g. C++) rather than spending time with a language having weaker support for these features (e.g. ad-hoc simulation of OOP features in C that are rigidly implemented in C++). The issues are whether OOP is applicable to a project and whether the coder(s) have decent design/knowledge skills to realise any relevance of OOP for the respective coding project.
I find OOP-based design, at the application level, to be a natural stance for dealing with software-coding challenges. I suppose kernel/embedded-level code is another scenario for which my stances may not necessarily be applicable.
C++ does not force a specific form of OOP.
It provides a platform for the production of OOP code according to a style/complexity paradigm maintained/enforced by the coder.
The existence of any deleterious level of “type complexity” and “interface interdependency” would more reflect a poor code design and issues with the associated human software designers/implementers. Sure, the C++ language contains the facilities to produce “deleterious” code but the language itself does not force a person to code in a deleterious fashion. Inadequate design/knowledge skills, in the context of the project, are to blame for this. Sometimes coders get out of their depth (e.g. impatient, temporal-based deadlines, etc.) and create badly designed/implemented code.
The author imparts a “scary” scenario for the existence of “complex types”. We should not forget that these “complex” types (i.e. C++ classes, which should not be designed to be difficult to use) manage the complexity of the code base through OOP-related mechanisms such as encapsulation, polymorphism, interface visibility, etc. Instances of C++ classes (i.e. C++ objects) are “fun” to use.
It’s about the design.
If “bloated” frameworks/libraries exist and are deemed a “bad” idea, it is not the fault of the language but the fault of the software designer(s).
Sure C++ is not perfect, but I never think about going back to C for my application-level libraries/executables. Minimally, C++ can be used as a “better” C in those procedural-type non-OOP programs.
Anyway, if someone chooses to stick with C … that’s fine.
If someone chooses to stick with C++ … that’s also fine.
At the end of the day we all have differently wired minds that handle complexity in their own special way and it is up to the individual to select a software language they are comfortable with in order to solve the respective software coding challenges.
That’s all folks.
The author is no doubt enthusiastic about C, but there are quite a few things he gets wrong.
At the time C was developed, and even during the 80’s, there were languages which had better compilation speeds, like Turbo Pascal and Modula-2.
The UNIX guys also decided to ignore the languages of their time, which provided already better type checking than C does.
C’s main weaknesses:
– no modules
– no way to namespace identifiers besides 70’s-style hacks
– null-terminated strings are an open door to security exploits
– the way arrays decay into pointers, ditto
– weak type checking
– pointer aliasing forbids certain types of optimizations
C developers used to complain about Pascal type safety in the 80s, but if you are security conscious:
– Read MISRA C
– Enable all warnings as errors
– Use static analyzers as part of the build process
Funnily enough, with those steps C gets to be almost as safe as the Pascal family of languages.
C, as well as its Objective-C and C++ descendants, needs to be replaced for us to get better security.
The latter do offer more secure language constructs, but they are undermined by the C constructs they also support.
Even Ritchie did recognize that C has a few issues:
http://cm.bell-labs.com/who/dmr/chist.html
If UNIX did not get copied in all universities all over the world in the early 80’s, C would just be another language in history books.
There are more weaknesses: a very poor standard library which doesn’t even contain a hash map or safe string handling…
Oh cry me a river.
And there is no way you could possibly be wrong about that!
Sure it has them, though presumably you didn’t know about that: you have no idea what “static” on globals and functions means.
Namespaces are a mixed bag. Sometimes they might seem useful, but more often than not I’ve seen them abused and make simple problems a lot more complex. Also, they are a nightmare to debug, especially when combining code from various sources (e.g. library linking) – name mangling is a linker’s nightmare. Huge projects, like the Linux kernel for instance, seem to work just fine without them.
I think you’re more complaining about a lack of automatic range checking. Sometimes it’s useful, sometimes it isn’t (mainly by introducing invisible glue code that can mess up some assumptions, e.g. atomicity).
Compared to what? For my tastes the type checking in C is pretty strong.
“restrict” has been in C since 1999. Your complaint is 13 years out of date.
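For the unfamiliar, a minimal sketch of what the qualifier buys you (the function and names are illustrative): it is a promise from the programmer that the pointers don’t alias, which frees the compiler to reorder and vectorize.

#include <stddef.h>

/* `restrict` promises dst and src do not overlap, so the compiler
   may vectorize the loads and stores freely. */
void scale(float *restrict dst, const float *restrict src,
           float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}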
They ask for money, no thanks. (NASA coding guidelines are an interesting read though.)
Not all warnings are errors, and while it helps in development, it’s very bad for e.g. libraries to ship code with -Werror enabled, since a change in compiler versions can introduce new/different checks or obsolete build options and thus break your build.
lint has been in Unix since V7 (1979).
Something stinks with the notable stench of “smug”.
It is true that C gives you greater freedom in certain things and that includes shooting yourself in the foot. No question about it. On the other hand, certain things are much easier and better done in C (low-level data crunching routines).
And there he claims, among other things, that C type specifications are richer than Pascal’s, directly contradicting your earlier statement. Be careful who you cite to support your case.
Looks like somebody’s got an axe to grind.
Then why did they not use Algol 60 or PL/I, which were the system programming languages of the time?
It has not.
Separate compilation is a primitive form of modules, but it is not the same.
Sure we could also keep on using the original UNIX, why care about progress?
Forget about the NULL character, boom!
Arrays should not be manipulated as pointers.
As for the usual C argument about array bounds checking, all modern languages that compile to native code allow selectively disabling bounds checking if required.
Almost every other static type language out there?
Except, like register, the compiler is free to ignore it and besides gcc and clang many C vendors still don’t fully implement C99.
In our society people tend to get money for their work.
You can always turn off false positives.
Sure, but:
– UNIX is just one OS among many
– Being available does not mean developers use it
Like in many other languages.
Sure, when compared with the original ISO Pascal. ISO Extended Pascal, like the other Pascal dialects, is more expressive than C.
UNIX is a good operating system, but it is not the god of operating systems.
Because, if you had even read the few Wikipedia pages on C, you’d know that C derives from Algol 60. The line is roughly this: Algol -> CPL -> BCPL -> B -> C. Each of those steps had its reasons and I’m not going to list them here. Your attempt to portray the designers of C as conceited merely shows that you carry a personal grudge that *your* language of choice didn’t become ubiquitous (I presume it’s Pascal, since you mentioned it so fondly – I’ve met a few FreePascal enthusiasts and they argue quite similarly to you).
As for PL/I – it was first published in 1964 when the predecessors of C were already fairly underway in their own life, and it was a proprietary product of IBM (whereas C was designed at Bell Labs). Your romantic view of the wide availability of quality computer languages in the 60s is totally missing the reality of development back then.
Either they are modules (albeit in primitive form), or they aren’t. You’ve managed to contradict yourself in a single sentence – good job. It’s also sweet that you try to declare your definition of a module as authoritative. Have you ever considered that perhaps not everybody agrees with your definition?
Wow, FreePascal on Windows user, I presume. The aura of your smugness makes my screen damp. Simply because something is old doesn’t mean it’s bad.
Why do you think that “more features” = “progress”? Natural languages, for instance, often times simplify their grammar through more use (e.g. the regularization of the past-tense form of verbs in English).
NULL is actually a macro typically defined to ((void*)0), but I understand if your case-insensitive eyes don’t see the difference.
Offhand remarks get offhand responses.
Yes, your personal opinion on the matter is really insightful.
This would be easy enough to introduce into C as well, say, by introducing an “array” keyword in variable definitions (which would turn on dynamic range checks). No need to redesign the language from the ground up. Might be nice to have, no dispute there.
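Purely as an illustration of that idea, here is a rough sketch of how opt-in range checks can already be layered on today’s C with a helper and a macro (all names hypothetical):

#include <stdio.h>
#include <stdlib.h>

/* Checked element access: aborts with a diagnostic when the index
   is out of range, instead of silently corrupting memory. */
static int checked_get(const int *a, size_t len, size_t i,
                       const char *file, int line)
{
    if (i >= len) {
        fprintf(stderr, "%s:%d: index %zu out of range [0,%zu)\n",
                file, line, i, len);
        abort();
    }
    return a[i];
}

#define GET(a, len, i) checked_get((a), (len), (i), __FILE__, __LINE__)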
This is a pretty slimy tactic to try and shift the burden of proof onto me. I’d now have to provide an extensive list to show that other static type checking languages out there have more relaxed rules, never mind even how one would qualify what “weak” and “strong” static type checking means. I don’t play this way. You claimed C type checking is weak, it’s yours to prove, not mine to disprove.
Oh my, is support really that bad?! Oh wait, you just pulled that one straight out of your behind:
https://en.wikipedia.org/wiki/C99#Implementations
Mind you, GCC still isn’t fully C99 compatible: http://gcc.gnu.org/gcc-4.7/c99status.html so even your “GCC and clang” statement is false. Do you even google before you assert something?
That’s not what I meant. I mean is that some random dude on the Internet isn’t going to persuade me to spend money on a product that he insinuates will “fix” my coding practices.
Obviously you’ve never built any bigger product somebody else coded from source, have you? I don’t have the time to comb through somebody else’s build tools to find which options fail and how to disable them.
Other platforms have other static analyzers. But UNIX has been the platform of birth for C, so I just wanted to show you that what you say here is not news to native C coders.
So you think forcing people is the proper approach? Have you ever considered that some people might not like that? Of course not, you are beyond error (see “arrays shouldn’t be treated as pointers” comment above).
What I was talking about is the contrast between highly abstract languages (Objective-C’s OO part, Java, etc.) and lower abstraction languages (C). Of course you can write data crunching in other low-level languages (even Assembly for that matter).
Except that ISO Extended Pascal didn’t exist when Kernighan and Ritchie designed C, so your citation of Ritchie to support your case misses the chronological order in which these languages appeared. According to Wikipedia, the first Pascal compiler in the US was written for the PDP-11, much later than UNIX appeared and probably even after C.
I never said so, but its history is intertwined with C, so knowing about UNIX gives you a good idea how C developed.
Well yes.
I never used FreePascal though.
I am old enough to have used C and Pascal when the first compilers were being made available in ZX Spectrums.
I didn’t say they were top quality, only better than C.
According to documentation of that epoch, many companies did license languages in those days.
I only care about the Computer Science definition and C does not offer that.
http://en.wikipedia.org/wiki/Modular_programming
Never used Free Pascal.
Windows is just another operating system among many.
Usually, more features tend to improve a programmer’s productivity.
Well, actually English grammar was originally simplified thanks to the Norman occupation:
http://geoffboxell.tripod.com/words.htm
I have seen enough code like str[index] = NULL to make that statement.
Not everyone writes str[index] = ‘\0’, especially the ones that like to turn off warnings.
Too many scars of off-shoring projects.
I imagine any security expert would agree, but I might be wrong.
Should I now give a CS lecture about classes of static typing?
I am well aware of that Wikipedia page.
There are many more C vendors than Wikipedia lists, and many customers don’t let you choose the compiler to use.
That assertion was because many think that gcc and clang are the only compilers that matter.
Well, have you ever done a 300+ developers multi-site project?
To expert C coders, you mean.
No, I am just a meaningless person in this world.
Just stating my opinion.
Point taken.
Agreed.
And here we go with the blanket statements again, asserted without proof.
Yep, they did, but Bell Labs obviously didn’t feel the need to.
Oh really? Then I quote from the Wikipedia page you linked:
Since 1989 C has had rigorous support for this kind of compartmentalization (.h interface files, .c implementation files) and virtually every single project I’ve ever laid eyes on has used it. Of course it’s rudimentary, but there are higher constructs on top which provide most, if not all of the features you would expect.
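To make that concrete, here is a minimal sketch of the convention using a hypothetical “counter” module: the public interface lives in the header, static gives the rest internal linkage, and #include-ing your own header lets the compiler cross-check declarations against definitions.

/* counter.h – the module’s public interface */
#ifndef COUNTER_H
#define COUNTER_H

void counter_reset(void);
int counter_next(void);

#endif

/* counter.c – the implementation */
#include "counter.h"   /* including our own header makes the compiler
                          verify declarations match definitions */

static int value;      /* internal linkage: invisible to other
                          translation units, i.e. module-private */

void counter_reset(void) { value = 0; }
int counter_next(void) { return ++value; }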
By that logic, more controls on a car means better/safer/more productive driving. In actuality, Einstein’s famous (possibly apocryphal) quote captures reality much better: “Everything should be made as simple as possible, but not simpler.”
I said ‘often times’, not ‘always’. Obviously at times languages expand in complexity to incorporate new necessary concepts and there’s nothing wrong with that.
These programmers deserve a kick in the nuts for the above construct. This will produce warnings (implicit cast of void * to char) and is indicative of the fact that the author doesn’t actually understand how computers work. I personally prefer str[index] = 0; Same meaning, less text, clearer.
Ah well, if you buy cheap product, don’t be surprised when it turns out to be shoddy. I have the same experience with offshore. Having a “safer” language means they will just provide you with dumber code monkeys.
Security is not a simple yes/no game – the most secure computer is one that is off. It’s all about finding middle ground. Some code warrants your approach, some doesn’t. Making blanket statements, however, will guarantee that at times you will throw the baby out with the bathwater.
If you can’t support your claims, don’t make them. But before you dig into it, take a look at: https://en.wikipedia.org/wiki/Strong_typing#Variation_across_program…
So any statements you present will most likely just express your personal opinions on the matter. Oh and I have an MS in CS, so I’ve heard them before (including “C sux”, “C rocks” and “Let’s code everything in Prolog”).
If your hands are bound by your customer, then I suspect you have other problems in your project, not just with the language.
I was talking about things like X.org, KDE, the Linux kernel, Illumos, etc. These are humongous code bases with tons of external dependencies and when doing a project that uses them, I don’t have the time to go through each and every piece and fix a maintainer’s bad assumptions about build environments, often times just to test a solution. That’s why I said -Werror is good for development, bad for distribution.
Agree, it’s probably news to you. GCC, for instance, supports -W and -Wall, both of which together activate lots of helpful static code analysis in GCC (unused variables, weird casts, automatic sign extensions, etc.).
No problem there – when you clearly state something as personal opinion, I have no problem. It’s only the assertions and blanket statements that make my blood boil. We could probably understand each other over a beer much better than over the Intertubes.
Sure, asserted without proof.
It is based on my understanding of what a better language is, from looking at the documentation of that epoch which I had access to during my degree.
For me, languages that allow the developer to focus on the task at hand are always better than those that require them to manipulate every little detail.
Usually that detail manipulation is only required in a very small percentage of the code base.
In the end I guess it is a matter of discussing ice cream brands.
And I have one, with a focus on compiler design and distributed computing.
Usually static languages with strong typing don’t allow for implicit conversion among types, forcing the developers to cast in such cases.
Overflow and underflow are also considered errors, instead of being undefined like in C.
Pointers and arrays are also not compatible, unless you take the base address of the array.
Enumerations are their own type and do not convert implicitly to numeric values, like in C.
Well let me quote another section of the article.
Somehow I don’t see C listed as a language that supports modules.
Sure you can do modular programming by separate compilation, but that is not the same as having language support for it. I have done it for years.
For example, in some languages that support modules, the compiler has a built-in linker and is also able to check dependencies automatically and compile only the required modules.
The types are also cross-checked across modules. In C, some linkers don’t complain if the extern definition and the real one don’t match.
In the Fortune 500 corporate world, usually there isn’t too much developer freedom.
I am aware of it.
As I answered in another thread, I have done C programming between 1992 and 2001, with multiple compilers and operating systems.
Since then, most of the projects I work on rely on another languages.
Yeah, it would be surely a better way to discuss this.
Essentially what you’re describing is limiting freedom in what you can do in a certain language, so that a programmer has no chance of running into trouble. Sometimes it’s a good thing, sometimes it’s slowing you down. The lack of pointers in Java, for instance, has frequently tied my hands down, e.g. I can’t pass a subarray as a zero-cost pointer to a subroutine to work on. Instead, I either have to change the interface to include an offset index, or create a copy (potentially a huge performance penalty if the operation is trivial). If I want to make sure that the callee doesn’t modify it, I must copy it. In C, I’d simply make it a pointer to const.
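As a small illustration of that last point (all names made up): passing a sub-range in C is just pointer arithmetic, and const keeps the callee from modifying it.

#include <stddef.h>
#include <stdio.h>

/* The callee sees only a read-only window into the array. */
static long sum_range(const int *data, size_t len)
{
    long total = 0;
    for (size_t i = 0; i < len; i++)
        total += data[i];
    return total;
}

int main(void)
{
    int samples[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    /* Pass elements 2..5 with zero copying. */
    printf("%ld\n", sum_range(samples + 2, 4));
    return 0;
}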
Sure is.
All of what you describe are limits on what a developer can do. At times it’s sensible to limit them, sometimes it’s simply throwing hurdles his or her way. For a CRM system, or a web app, fine, it’s sensible not to manipulate pointers – that isn’t performance critical code. But not everywhere. I have no problem with you saying ‘sometimes/most often these are not necessary’. The problem is when you assert that C is essentially a stupid, pointless language and that everything else that you like is better. I know it’s tough to swallow, but there are painfully few OS kernels written in Pascal, and probably for good reason.
That’s merely because the author didn’t follow his/her own definition. The logical structure of such an argument is:
1) If language X has feature Y, then it has support for modular programming
2) Here are the languages I know of that meet criterion 1
All the author did was miss another language that clearly meets their own criteria for modular programming.
This is a compiler/build infrastructure feature, not a language feature. For instance, Sun’s javac doesn’t do that (just tested it), yet Java clearly fits the definition. So for all matters, this is a pointless criterion.
While it is possible to have poorly written code where interface declarations are completely torn away from their own implementations, the correct practice is to #include your own interface files when making the implementation, exactly to provide this check. Again, your complaint is at least 20 years out of date (this was true in K&R C and other pre-C89 dialects which lacked proper interface declarations).
Interestingly enough, Multics, whose developers included Ritchie and Thompson, was written in PL/I and used one of the first non-IBM compilers for the language. PL/I was by all accounts a product of ‘design by committee’ and compilers for it which supported the entire language were difficult to implement and required high-end computers at the time.
“Simple and Expressive”
In what way is C simple? Take someone who’s got a good background and mind for programming and show them C. See how long it takes them to actually understand how to define a function to pass to qsort() and bsearch() and then get the syntax right for function pointers. Versus .sort() and .find() in any object language, or the array-style dictionaries in Smalltalk or Lua. No way is C more expressive, much less ‘simple.’
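To illustrate the complaint, this is roughly what the qsort() boilerplate looks like, even for something as trivial as an array of ints (a minimal sketch):

#include <stdio.h>
#include <stdlib.h>

/* qsort hands the comparator untyped pointers to the elements. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids the overflow of x - y */
}

int main(void)
{
    int v[] = { 42, 7, 19, 3 };
    qsort(v, sizeof v / sizeof v[0], sizeof v[0], cmp_int);
    for (size_t i = 0; i < sizeof v / sizeof v[0]; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}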
“Simpler Code, Simpler Types”
Okay, that I sort of agree with. I was bouncing through the JavaFX API docs yesterday, and there sure were a lot of bizarro return types that don’t make sense off hand. However, there are entire books written about how to figure out what a damn variable definition means in C. const * char is a pointer to const… I think. Right? Very simple.
“Speed King”
Okay.
“Faster Build-Run-Debug Cycles”
In what universe? I grant you, C++ takes longer to compile than straight C, normally because it has to compile anything having to do with templates all the way back up the line. But my Java compiles are every bit as fast as C compiles. Maybe faster, since Java often does a better job of figuring out what needs to recompile, vs. ‘hmm, maybe I should ‘make clean’ just to be sure.’ And the Build-Run-Debug cycle of ANY scripting language is about a hundred times faster than C. gcc --help takes longer than no compile step at all.
“Ubiquitous Debuggers and Useful Crash Dumps”
Again, I can’t argue with that one. Well, C crash dumps aren’t always useful, but they’re there when you really need them.
“Callable from Anywhere”
Okay. Although I’m not sure there’s any real advantage over C++ in this regard.
“Yes. It has Flaws”
Indeed. Of course, every language does. But some of the crap you put up with in C is really just awful. Strings may be a simple, dumb example, but damn it just about every program has to deal with strings constantly. It may be a small grain of sand, but it chafes after a while. And sure, I can get an internationalization library for C. (Or ten of them, and who knows which one is actually the good one.) But in Java it’s just there.
A final thought. Would Couchbase have been written at all if it hadn’t started in Erlang? Sure, when it got big and successful and they needed speed improvements they re-coded in C. But could they have gotten that far without a higher level language to start with? The whole idea of “rapid prototyping” isn’t just good because it makes programmers’ lives easier, but also because you get up and running with something functional quickly. You have to prove a business case or convince people to join your team, and that point is a lot harder to get to in C.
C is much simpler to understand fully than most other languages, in the sense that you understand exactly what a given line of code does. You pretty much know what (unoptimized) assembly will be generated by reading C. Try that with python, java, or even C++.
Of course C is harder to understand when you start with the language, but that’s not what the author is talking about. He’s talking about very experienced programmers (which he is).
In the universe where java is not a “comparable language” I suppose, but I admit I wonder about this one too.
In my world, such developers should have no problems understanding languages more complex than C.
No. Not when there’s a VM or an interpreter involved, not to the point of knowing what assembly (not just bytecode !) gets generated. Even with a language deeply rooted in C like C++, there are so many parts of it that keeping them all in your mind while you code casually is not possible.
Again, we’re not talking about just understanding the language you code in, but understanding what will actually get executed on the CPU.
I know my chosen languages very well, better than I know C. But I still have a better understanding of what C does to my machine than I do with any other language.
This is a flawed premise. You are positing a hypothetical question and drawing conclusions without any data to show for it. In other words, what you’re doing here is blind speculation.
“const *char” isn’t a valid C construct. I think you meant ‘const char *’ which is the traditional form to write the more proper ‘char const *’. That is a “character constant”-pointer, i.e. a pointer to a character-constant. Contrast with ‘char * const’ which is a “character”-pointer constant. This is one of the more luxurious features in C, having the ability to keep a close eye on object mutability. How do I pass a constant array in, say, Java? I can’t, there’s no way to do it.
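A quick cheat sheet of the variants, for illustration:

char buf[] = "hello";

const char *p = buf;        /* pointer to const char: *p = 'x' is an
                               error, but p may be repointed */
char *const q = buf;        /* const pointer to char: *q = 'x' is fine,
                               but q may not be repointed */
const char *const r = buf;  /* neither the pointer nor the pointee
                               may be modified */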
The rest of your comment, I have no problem with. C is not a one-size-fits-all solution.
‘Collections.unmodifiableList()’ will sometimes answer the need. But you are absolutely right; C has a lot more control.
Fair enough. I stand by the basic premise, however. Using function pointers to pass a compare function to a sort function is more complicated than blah.sort(). Particularly if your collection has a natural sort order, like strings.
Ha! Okay, it was 4 in the morning when I wrote that, but the fact that I screwed up my example kinda proves the point.
The point of passing a compare function to a sort function is for when you want to sort things into a different order than the natural one.
Except in C, there is no way to sort anything without passing a compare function to qsort. Even an array of ints needs a hand-coded, if trivial, compare function passed to it.
Actually, the fact that qsort takes a comparator function means that it is much better than a simple sort() method on an array object. It allows you much more freedom in comparing compound and complex objects, not just primitive values. How, for instance, do you sort an array of structures/objects sensibly? The language doesn’t know how, so you have to provide a comparator. This is one of the less problematic features of the standard C library – providing generic interfaces for everything. If you need your ultra-fast ultra-optimized sorter, you can always code it yourself.
In Java, your class implements Comparable, then provides a definition of compareTo(). You can now use your class with anything that wants to sort or search. You could argue the difference is just semantics or syntactic sugar. Still, the concept and syntax of function pointers is considerably more difficult to understand, and results in code that isn’t as checkable by the compiler and can cause really wacky things to happen if you mess it up. In C, if I accidentally pass my string compare function to qsort while I’m trying to sort an array of ints, the program will compile and run, and then screw up badly at runtime.
That’s not so much a problem with using function pointers as it is with weak typing. qsort has to be able to sort any types, so the arguments passed to the sorting function are void*, making the function signature similarly unhelpful.
However, if you write a thin wrapper on top of qsort that takes in a function pointer with more specific argument types, then the compiler will complain reliably.
That sounds obvious now that you mention it, but in over 20 years of C programming that never occurred to me, nor have I ever seen anybody do it. Of course, it adds that much more complexity, but it still sounds like a damn good idea.
I find I have to write wrappers around a lot of library functions in any language. Maybe except for Python, with its “batteries included” principle.
The C++ STL has a boatload of generic algorithms that provide typed sorting that qsort lacks.
http://www.cplusplus.com/reference/algorithm/
Templates keep all the complexity at compile time.
I think you just consider it simpler because it’s what you’re used to. For instance, it would take me a good while to figure out what you said about compareTo(), simply because I’m not used to it. Also, your example covers custom sorting of objects, but not of first class types – how do I do a custom sort on strings, or ints, or something else atomic? Sure, you could wrap the primitive types in custom object classes and incur a significant performance penalty (an array of 10000 ints is going to need 10000 boxed-object allocations and subsequent garbage collection), in addition to adding tons of lines of code (plus a few new extra classes/files), whereas in C the problem could have been solved efficiently in about half a dozen lines of code with no extra allocation needed.
Always a danger, but I think it’s more likely the way my mind works than experience. I learned C and qsort a good decade before picking up Java and C++. In general, the implementations are quite similar. In C, you write a compare function, calling it whatever you want. In Java, you write a compare function and make sure you call it compareTo. Anyway, you have a point.
I’ve never needed to do that, so I’ll have to think about it. You may be right that wrapping the type in a new class may be the only way to do it. Of course, in Java ints would have to get wrapped in an object anyway, so that’s a sunk cost. While it is a niche case, it’s pretty interesting.
C# also has faster build-run-debug cycles.
And as for debuggers… Visual Studio blows C out of the water – pause any time, IntelliSense to go back in steps, check variables, call stacks. Aaaand the Parallel Tasks window for debugging threaded applications. Unless gdb has improved on debugging more than one thread.
C is a language, Visual Studio is an IDE. You’re comparing two different things.
Correct me if I’m wrong, but VS supports all of the above on C++ (among other things), which is almost entirely a superset of C, i.e. it supports them on C as well. You’re arguing against gdb, not C. And as for gdb, except for “Intellisense to go back in steps” (isn’t Intellisense a code completion feature?), all that you mention can be done in gdb as well. Perhaps you mean some specific feature and/or problem?
He probably meant IntelliTrace, which is the historical debugging tool bundled with Visual Studio Ultimate. IntelliTrace is implemented by using IL metadata, and thus it does not work on native C++ code.
Parallel Tasks/Stacks was designed primarily to debug task-parallel applications — though you can also use it on threads. gdb would need an extension for whatever task-parallel library you’re using. The Visual Studio debugger has it because the compiler, runtime, and IDE are all bundled together in one package — and so the debugger can be tightly-coupled to the bundled libraries.
It’s not so much C# vs C++, as the “everything bundled together” model vs. the individual-tools model. If you select components individually, then you’d just install a whole bunch of addons to get what you need in the IDE — Eclipse being a primary example.
Hmm… I don’t see your point. Prototyping in a higher-level language doesn’t take away the usefulness of C. Judging by this thread I’m hardly the only one combining C and Python in my daily work, and yes I pretty much always write new code in Python first and then translate performance hotspots to C.
That said, I’m certain Couchbase could have been written directly in C without prototyping in a higher level language. I think the initial thought was to have it running in Erlang, but the performance was found lacking, which prompted a rewrite in C.
The Linux kernel devs wrote git straight off in C during a very fast development phase and it’s been a huge success, in no small part due to its much-lauded performance, but also its stability.
Unlike the other common programming languages of our days, C is the only one expressive enough to do everything you want to do with the machine.
For some it is a strength, for some it is a weakness, but to use C well you need to know how CPUs execute code.
If you know that, aside from syntactical quirks, using C is easy and results in small and fast programs, even in userspace.
Nowadays it is fashionable to criticize C and to say it is not good for userspace applications, but the fact remains that most of the applications which make the computing world run are written in C. Pure C.
Despite some people thinking otherwise, OOP is much more confusing for newbies, and newbies should start by learning to program in assembly. Yes, in assembly. It is the only way they will understand what the language is doing when they go on to higher-level languages. Theory means nothing without practice and understanding. Once you know how assembly languages work and the memory model of machines, pointers are just natural.
Substandard programmers can flock to languages like C# or Java. Not that I find Java, the language, wrong in itself; it’s just that it is often used by programmers who do not understand what’s happening and who want to be cool by saying stuff like “it’s secure, it’s run by a VM, it’s sandboxed, it doesn’t have pointers!!”. Well, it does have references, which, internally, are just like pointers! It’s hidden from the eye, but they are still there. Proof is the argument passing to methods in Java: if I pass a reference to an object as an argument, and I change the value of the reference inside the function at a later moment, the value of the reference I passed to the function externally doesn’t change. Just like C, it’s still passing arguments by copying. Most other things are just things the Java compiler enforces for you and the JVM sets up automatically for you (like garbage collection, interfaces, etc.).
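A minimal illustration of that point in C (names made up): the pointer itself is copied into the callee, so rebinding it there is invisible to the caller, exactly like rebinding a reference parameter in Java.

#include <stdio.h>

static int other = 99;

/* Only the local copy of the pointer changes. */
static void rebind(int *p)
{
    p = &other;
}

int main(void)
{
    int x = 1;
    int *p = &x;
    rebind(p);
    printf("%d\n", *p);   /* still prints 1 */
    return 0;
}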
You can compile C for a virtual machine, and it will be as “safe” as Java run in a virtual machine, and also the MMU already tries to “isolate” user code from system code in order to make things safer. Most times it is sufficient, at times not.
The loose type checking adds to the expressiveness of C; it doesn’t make it “unsafe” if you know what you are doing. That you can copy things elsewhere without being stopped is a very useful side feature, especially in system code.
Also, C strings are not necessarily too tied to the language. You could modify a C compiler to output Pascal-style strings for string constants, and you could write custom string functions that handle Pascal strings if you really wanted to.
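For illustration, a minimal sketch of what such a length-prefixed (“Pascal-style”) string could look like in plain C (all names hypothetical):

#include <stdlib.h>
#include <string.h>

/* The length travels with the bytes: no terminator scans, and
   embedded zero bytes are perfectly legal. */
struct pstring {
    size_t len;
    char data[];           /* C99 flexible array member */
};

struct pstring *pstring_new(const char *bytes, size_t len)
{
    struct pstring *s = malloc(sizeof *s + len);
    if (s == NULL)
        return NULL;
    s->len = len;
    memcpy(s->data, bytes, len);
    return s;
}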
You can, you can: what matters is how C is currently used, not how it could be used.
And C isn’t used *at all* like how you’re suggesting it could be.
You, sir, are an idiot! Please remove the fingers from the keyboard immediately, and never write a line of code or text again!
Really?!
And I thought Ada, Modula-2, Modula-3, Turbo/Free Pascal and Oberon(-2) already provided everything C does and more, just to name a few comparable languages.
Security exploits everywhere, since not everyone is a top developer.
Then you throw C performance out of the window, because all C libraries assume null-terminated strings and you end up converting between string types all the time.
Since when does C performance depend on null-terminated strings? God, have you ever programmed in C?
Whenever I’m looking for performance, I never handle strings, but blocks of bytes.
Strings are for “higher-layers”, such as sending a file name as a parameter. This kind of usage has zero impact on critical loop performance.
The performance loss is in the string conversion.
As long as you keep using the same type of strings, there is no issue.
1992 – 2001
I am a system software developer and I have been using C for about a decade now. The reason why I got into C was simple: Unix is developed in C and this is what you tend to use for Unix system software development.
Some people look down on C because it’s old and uncool, but just because something is old does not mean it’s crap. Many people avoid C because it’s not an object oriented programming language, however I think you can use object oriented programming paradigm with C, as long as you are willing to fully understand how it works at the low level. With a bit of effort and discipline, I have been able to use plain C to achieve single inheritance, polymorphism and generics. It takes a bit longer to develop your code compared to Java or C++, but I would argue it makes you a better programmer when you understand the low level implementation.
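For the curious, here is a minimal sketch of the vtable technique being described (the shape/circle names are purely illustrative):

#include <stdio.h>

struct shape;                       /* forward declaration */

struct shape_ops {                  /* the hand-rolled "vtable" */
    double (*area)(const struct shape *);
};

struct shape {
    const struct shape_ops *ops;    /* the "vtable pointer" */
};

struct circle {
    struct shape base;              /* single inheritance: base first */
    double r;
};

static double circle_area(const struct shape *s)
{
    const struct circle *c = (const struct circle *)s;  /* downcast */
    return 3.141592653589793 * c->r * c->r;
}

static const struct shape_ops circle_ops = { circle_area };

int main(void)
{
    struct circle c = { { &circle_ops }, 2.0 };
    const struct shape *s = &c.base;        /* upcast: polymorphism */
    printf("area = %f\n", s->ops->area(s)); /* dynamic dispatch */
    return 0;
}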
So despite its age, C is a very flexible and powerful programming language, although I would prefer for Ada to be more ubiquitous.
Exactly my experience as well. But when I break it to the modern wannabe-hipster script kiddies here on OSNews, I get modded down “C doesn’t do feature X the cool way, so it’s stupid” is the premise here.
It’s fine understanding how the OO features work on a low level. I’m not sure I’d say people *need* to know this to be decent programmers, but I’m personally fond of my background in C and assembly.
But, after learning how to do things at a low level, why would you continue writing code in that tedious and painful way, when there’s tools better suited for the job?
Because it’s not tedious and painful, the overhead of manually doing basic OO development in C is not very big. It would be tedious and painful to use C++ instead, because I don’t have as much experience with C++.
Hm, fair enough point wrt. the experience (you’re never too old to learn something new, though!) – but I’d still say the manual work is tedious… otherwise C++ would probably not have been invented? (Keep in mind that the early versions translated to C – you’re doing a compiler’s job.)
I personally also enjoy the higher type safety and a bunch of C++ features (RAII!) and some libc++ things. Even if not doing OOP as such, I wouldn’t be using C, I’d be using C++ as a Super-C (and I’d even argue that it’s not necessarily a bad idea for kernel mode work, though you definitely would be using a restricted feature set of C++ there).
Personal preference and experience are some pretty big factors. I’ve played around with quite a lot of languages by now (nowhere near proficient in all of them!). Most of them are in the C-like comfort zone (JavaScript, Python, Lua, C# and Scala do offer some alternative ideas though, and even a bit of FP mindset).
But C++, for some reason, has a special place for me. Probably because (especially with C++11!) it lets me program in a lot of different styles, while still offering kick-ass performance and being relatively safe – and RAII really is nice compared to garbage collection.
Not saying it’s the end-all-be-all of languages, I believe in the right tool for the right job… I just have a hard time seeing why you’d use plain C, when you could do procedural programming with C++ 🙂
It has nothing to do with being old to learn something new, but more with spare time. There is too much information out there, too many different ways of skinning a cat. I personally don’t think C++ is worth the effort, I just don’t like the way it’s designed. I’d be more inclined to put the time and effort into Ada, I think Ada is far superior to C++, but that’s just my personal opinion.
I can see why C++ and Java appeal to so many people, but there are some people like me who prefer simpler things in life. As an example, I still own an old monochrome Nokia mobile phone, it does the job, it never crashes and I would never trade it for an iPhone or any other gizmo of that sort.
C is unreasonable, and effective.
To me, someone who claims C is superior to any other language is someone who has an unstructured mind when it comes to software design.
C allows you to just code and not worry about anything but the hacker-friendly part. However much that appeals to programmers, it is a bad thing.
When you’re coding in C, your software still has a design, even if the language does not expose it as explicitly. How many C programs around are actually a pure OO design in disguise, with a self-implemented vtable? It’s not because you don’t use C++ that your program does not have “exceptions” (as in error handling logic). Etc.
Software design is hard. It’s harder than just coding. How many programmers claim to know OO yet truly understand its core foundations? What about the Liskov substitution principle (just to name one)?
C is absolutely appalling at dealing with modern software design problems. For example take concurrency (and I don’t mean just spawning a couple of threads). Read the rationale behind Boost.Thread, C++11 and lambdas, or watch Herb Sutter videos about concurrency in C# and C++ to get a taste of the complexity of the problems.
C vs C++, C vs C#, C vs Java, C vs OO, etc is an old debate that is slowly dying as a newer generation of programmers is replacing the old guard. And funnily enough, the quality of software has massively improved over the years.
And yet, C is the most popular language and has been on the rise for a few years (even displaced Java as the most popular language a few years back):
http://www.tiobe.com/content/paperinfo/tpci/index.html
So, what gives?
Demand for C increased a few years ago due to the need for developers in the embedded market.
Otherwise I would take this link with a pinch of salt: Visual Basic 6 ranks almost as high as C# and is on its way to surpassing it.
I think that a more interesting metric would be the languages used daily on our computers and on large applications.
So what you are saying is that C isn’t a good fit for *your* problem domain. No problem there, there’s no silver bullet as languages go, and everybody picks their favorite language also according to personal taste.
If you are allowed to formulate arbitrary questions, you can get arbitrary answers. That doesn’t mean that they apply in the real world, though.
As for me, personally, I don’t care if C is the least or most used computer language in the world. I still use the right tool for the job, be it C or something else.
Careful there, saso! That sounds absolutely reasonable, which is not allowed on teh Interwebs. 🙂
Visual Basic 6 just won’t die. Trust me, I wish it would.
So, are you claiming that there is some other programming language in which the future OSes are being written? Something the new guys can use instead of the old, good-for-nothing C?
Interesting!
I can never understand posts like this. I doubt carpenters sit around and argue about whether screwdrivers or hammers are better. I’m a writer and I don’t argue about whether pens are better than pencils. In the kitchen I don’t argue about whether knives are better than scissors, or whether a spoon is better than a fork.
Use the right tool for the job. OK, so maybe for your jobs C is often a better tool than Java. That doesn’t mean “C is better” or “C is more effective.” For my jobs Haskell is a better tool than C. That doesn’t mean “C has broken my heart” or “C focuses on the wrong things.” Goodness, we can have different tools that do different things; just because a tool isn’t suited to a task–or to the tasks you do most often–doesn’t mean that tool sucks.
You should have more up-mod’ing for this but since I already commented I can’t.
Btw, sporks beat both spoons and forks
You can use ObjC/C/Asm for the low level and build everything high-level with other languages. OCaml is one example, Ruby is another, and Go is yet another. You can have a stack that does not use C++. High-level languages are useful for large-scale programming. Unfortunately, there is no standard, IMHO, that allows you to write a library in Pascal/OCaml/C/Asm or Guile and reuse it from the others. This is unfortunate. For example, it makes no sense to waste man-months writing an XML library in C (it can optionally be done) when you can write it in Go. It makes no sense to write a proof system in C when you can do it in OCaml, and so on. Scripting languages are also another essential ingredient.
I believe that the main problem is the hardware, not the complexity that fostered C++. Fix the hardware in order to simplify the low level, and use high-level languages to build the stack. I could use C to interface to PCI and build a TCP/IP stack in Go in order to make it available to other consumers, or even to C. The real problem is that companies want to maintain their profit margins in hardware, and so make bad hardware. Why should I ever need a kernel driver for a printer if I could write it in user space with OCaml and some C libraries, over TCP/IP or a wireless interface?
The complexity grows, the tools are there, but the companies don’t get it. Bad hardware costs man-hours and makes life tougher.
Are you suggesting we build hardware that interprets high-level code directly? Do you realize that this makes CPU design nearly impossibly complex? Even the handful of Java processors and VLIW processors that have been built still execute relatively low-level “bytecode” instructions that are nowhere near abstract language complexity levels.
Perhaps the problem is that you may have a very poor (or lacking) understanding of computer architecture.
The AS/400 mainframes use bytecode (ILE) as the program format, with the JIT integrated into the kernel.
The code gets JITted and cached in the executable, either on the first use or on demand.
Additionally the code might be regenerated if required.
This was kind of the route Microsoft was trying with everything .NET on Vista.
Now let’s see where WinRT (the COM ABI) goes.
C is nice, and has worn remarkably well, but if I were making the choice between those two for a new project, C++ would normally be the winner.
This may be sacrilege to some, but considered strictly as a technical book, I think Stroustrup’s “The C++ Programming Language” is better than K&R’s “The C Programming Language” as well.
Sacrilege! Burn the witch!!!!
Let’s open the free buffet:
http://www.hxa.name/minilight/#comparison
Kochise
I love C and Python…. and I love the fact that they’re fairly simple to intermix.
Porting C to Python and Python to C is pretty straightforward (as long as you’re not doing OO programming in Python, obviously).
To me… I see Python as C but with a nicer syntax and just the right amount of built in data structures.
I love being able to create a throw-away list, dictionary, or other structure right inline in a statement.
Want to see if the first two letters of a string match one of several other strings?
In Python:

if foo[:2] in ['XX', 'YY', 'AB', '!!']:
    print 'woo hoo'
In C:

/* assumes <string.h> and <stdio.h> are included */
if (strncmp(foo, "XX", 2) == 0 ||
    strncmp(foo, "YY", 2) == 0 ||
    strncmp(foo, "AB", 2) == 0 ||
    strncmp(foo, "!!", 2) == 0) {
    printf("woo hoo\n");
}
… it’s the little things.
Is the Python going to perform faster? Hell no, but I love the readability. And if it needs to be ported to C for performance, that is straightforward.
I sympathize. I have found that prototyping stuff quasi-interactively in Python and then porting it over to C/C++ can be a very productive way of programming.
Exactly.
Previous generations might have prototyped on paper or a chalkboard before “finalizing” the C code.
That’s from a time when it made sense to plan ahead, because you might be using punch cards or had to wait your turn for computer time.
Python makes a great prototyping language because it has been described as executable pseudo-code. Neither too low-level nor too high-level.
When I prototype in Python I’m doing the same thing, I’m just using a text editor rather than a pad of paper.
I wonder if this gentleman has tried Go? If he is a big C fan, Go might be a great choice for the stuff that doesn’t need to be written in C. Great quote: “Go is like a better C, from the guys that didn’t bring you C++” — Ikai Lan. Not only is Go a safer version of C with lots of great modern features, it integrates very well with C via cgo.
If memory usage and speed are your preeminent concerns, C is certainly a force to be reckoned with: http://benchmarksgame.alioth.debian.org/u64q/which-programs-are-bes…
I have never tried Go, but I would give D a try instead.
I can’t imagine why, unless you are doing embedded development.
Could you please elaborate on your comment?
D is a systems programming language (like C or C++) but with a very high level of abstraction.
http://en.wikipedia.org/wiki/D_(programming_language)
http://en.wikipedia.org/wiki/Golang
http://tour.golang.org/#1
“Go is like a better C, from the guys that didn’t bring you C++” — Ikai Lan
Ken Thompson, Rob Pike & Robert Griesemer set out at Google to create a modern C. Go is an extremely simple language compared to a multi-paradigm language like D. For some people this means worse; for many it means better. For me, being simple yet providing all the tools needed is perfect. IMHO sufficient simplicity is elegance, and leads to maintainable code.
If you Google D vs Go you will find many threads; take a look at the discussion, it is much broader than we could hope to cover here. (Keep in mind most results are three years out of date.)
If I had to make you a short list I would say I like Go because:
* For me personally, I have never had a more productive language experience than Go. (I have mostly used C, C++, C#, Java and VB/.NET, with some JavaScript, Perl, PHP, Haskell and Prolog.)
* Go makes it easy to get concurrency right on the first pass, and has extremely powerful concurrency features built in. In Go there is no writing a single-threaded version and then coming back to multithread it later.
* The entire language spec is small enough to read in one sitting and understand and know by heart in less than one month.
* The Go compiler is extremely fast – faster than C or D compilers.
* Go performs very well: http://benchmarksgame.alioth.debian.org/u64q/which-programs-are-bes…
* Language formatting is standardized (gofmt). No more newline flame wars for example.
* While D appeared in 2001 and Go in 2009, D and Go are now neck and neck on Stack Overflow and GitHub. On OSnews, Hacker News, Slashdot etc. you see many more Go stories. While this does not in and of itself make Go better, if it means Go has a stronger, faster-growing community, that is a positive for developers who choose Go.
I’ll stop here for the sake of brevity. In short: D is a better C++, and Go is a better C that believes C++ was a wrong turn.
I just can’t buy into the notion of Go as a ‘modern C’. There are similarities in syntax, not surprising of course when considering who the authors are, but the languages are targeting different areas.
I do like Go, it’s a language I am currently dabbling with and I put it somewhere between python and C. I think it has a great future, largely due to the built-in concurrency features you mentioned since the future seems to hold an ever increasing number of cores per cpu.
But in the domains where C dominate, characteristics like high performance, deterministic memory handling, small memory footprint, no runtime overhead, low-level constructs for efficient data manipulation, ease of use from other languages, etc are what makes it so popular.
Go really doesn’t apply here. It will never have the same performance, due to being a ‘safe’ language and also due to having automatic garbage collection, whose logic uses up cycles even when it’s not actively reclaiming memory. It does not have a comparably small memory footprint, as it includes the runtime in each binary; it lacks constructs like unions; it lacks a simple bitfield type. In short, it is not a replacement for C.
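For the record, here is a small sketch (my own example) of the kind of layout control meant here, which Go omits:

#include <stdint.h>
#include <stdio.h>

/* A union lets you view the same storage two ways
 * (the byte view depends on endianness). */
union word {
    uint32_t whole;
    uint8_t  bytes[4];
};

/* Bitfields pack several flags into one machine word. */
struct flags {
    unsigned ready   : 1;
    unsigned error   : 1;
    unsigned retries : 4;
};

int main(void)
{
    union word w = { .whole = 0xDEADBEEF };
    struct flags f = { .ready = 1, .error = 0, .retries = 3 };
    printf("low byte: %02x, retries: %u\n",
           (unsigned)w.bytes[0], (unsigned)f.retries);
    return 0;
}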
And when I’m talking about domains where C is dominant, I mean things like system-level code, audio/video encoding/decoding, archivers/compressors, low-level frameworks, image/sound manipulation etc.
Currently Go is making its largest splash in web development. Part of this is due to some of its characteristics; part of it is likely that the web-developer space is quite prone to experimenting with new languages.
However, in the longer perspective I can see Go having a ‘go’ at replacing C++/Java/C# for application-level development.
Desktop applications like LibreOffice, Inkscape, GIMP, etc. could easily have their C++ parts replaced by Go, while keeping the underlying performance-critical C components (like GEGL for GIMP) where necessary.
Yes, but it will get slower once more optimizations are implemented. Still, they chose to write their own Go compiler from scratch after finding the LLVM and GCC backends much too slow, so there’s good reason to believe it will remain quite fast.
Hmmm… are you cherry-picking results, or am I missing something? Here is the straight-up list:
http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all…
Go is quite a bit slower than Java 7 and much slower than C (which is the fastest overall). Still, there is a slew of performance improvements in store for Go 1.1 from what I’ve read, so it will be interesting to see where it lands once that is released. Certainly the performance isn’t bad considering it’s a 1.0.3 release of a very young language, but again, calling it a modern C is something I find to be nonsense.
Cherry-picking D as a comparison is rather pointless, I think; it does not have the resources that Go enjoys through Google, and its development history is quite unlike that of Go, with community fragmentation over runtimes and standard libraries etc.
Just a small remark: check out Native Oberon and the Bluebottle OS.
Desktop operating systems used at ETHZ in Zurich, implemented in GC-enabled systems languages.
http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager
Only the boot loader and hardware bindings are implemented in assembly, with the remaining parts in Oberon or Active Oberon.
Fully working desktop systems, used for operating system research and teaching.
The main reason mainstream OSes are still not using GC-enabled systems languages is inertia, but this is already changing, at least on Windows, with the C++/CX extensions.
What are you trying to prove by the existence of such OS implementations? I’m sure someone has a research OS written in an interpreted language as well.
Show me benchmarks where these systems are evaluated under pressure and compared with native unmanaged equivalents like Linux/BSD/NT.
If something had come along that used garbage collection and memory safety and magically managed to perform as well as native unmanaged code, then obviously we’d all be using it by now.
No, it is because we are not ready to give up performance at the system level. The amount of optimization at this level is extreme; we are talking about the use of specific compiler extensions to manually handle such low-level aspects as branch prediction and cache prefetching/clearing. You don’t want garbage-collection pauses in this setting.
But if you have any benchmarks that support the notion that the reason we are not seeing garbage-collected, safe-language-based operating systems in mainstream computing is inertia rather than performance, please show me.
Again, I’d love it if there was some magic silver bullet that allowed us to have full memory safety while having the same performance as in unmanaged native code. In reality it’s a tradeoff, and in areas such as kernel level code where low latency is absolutely crucial, performance trumps convenience.
Native code’s bugs get squashed; its performance remains.
These systems are also native; “managed” was a term coined by Microsoft to differentiate VM code from pure native code.
I just have user experience, no benchmarks.
They are good enough to use as a desktop system, with text-editing software, video and sound processing, and Internet usage, for example.
Of course, some work is needed to really be able to write something like Crysis on those systems, or make them into server systems.
Not even with assembly can you control the processor’s internal optimizations, unless you’re talking about simple architectures.
Systems programming languages with GC also allow you to disable the GC in critical sections and make use of manual memory management in unsafe/system code sections.
This is how C++/CX works, for example.
You get to use the classical manual memory management from C and C++ libraries, the reference-counting wrappers from the STL, and the WinRT references with compiler support.
There was a talk at the last BUILD where Herb Sutter mentioned that Windows code is slowly being migrated from C to C++, where they might take advantage of such facilities.
Before I forget: in CS, reference counting is also a form of GC.
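A minimal sketch of that idea in plain C (illustrative only, not any particular library’s API):

#include <stdlib.h>

/* Manual reference counting: the bookkeeping that shared_ptr or
 * the WinRT references do for you. Not thread-safe; real
 * implementations use atomic increments/decrements. */
struct buffer {
    int   refs;
    char *data;
};

struct buffer *buffer_new(size_t n)
{
    struct buffer *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->refs = 1;
    b->data = malloc(n);
    return b;
}

void buffer_retain(struct buffer *b)
{
    b->refs++;
}

void buffer_release(struct buffer *b)
{
    if (b && --b->refs == 0) {  /* last owner frees the storage */
        free(b->data);
        free(b);
    }
}

The “collector” here is just buffer_release: whoever drops the count to zero does the collection.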
Heh, I should have checked the name of the language; Native Oberon is quite a telling name.
Which really doesn’t tell me anything, it’s an unmeasurable anecdote.
But let me reiterate, should there be a garbage collected memory safe operating system with the same or just about the same performance as that of native, manually memory managed operating systems then the industry would pounce on it in a heartbeat since these are all great features… until you factor in the performance cost, which is why Linux/BSD/NT/Darwin haven’t been replaced.
There really isn’t any conspiracy going on, there’s simply no ‘oh, this is good enough performance’ when we are talking about operating systems / kernels. In userspace, sure, but not in such core functionality which in turn has such a profound impact on the overall performance when put under pressure like in high performing servers, computing clusters, super computers etc. Areas where something like even a 3% performance difference isn’t brushed off as irrelevant.
Heck, I’ve seen hardcore gamers curse at less of a performance drop than that.
Yes, pretty much all garbage-collected languages allow for some sort of unsafe operation/mode, but entering and leaving critical sections comes with a performance cost, and you are then adding complexity plus the possible bugs you tried to avoid by using a safe garbage-collected language in the first place.
You are still left with less performance than native code, and you may have introduced memory bugs; can’t say I personally find that solution very attractive.
I don’t always use garbage collecting, but when I do, I use full garbage collecting.
The issue is much broader than that; that is why I said inertia in my initial post.
A systems programming language’s success is bound to the success of a specific operating system. There has yet to be a systems programming language that became successful without being shipped by an OS vendor.
So in the current scenario there would need to be an operating system vendor that decides to promote such a language alongside an operating system coded in it.
The UNIX family of operating systems will of course never move away from C; it does not make sense to do so.
Microsoft is only taking baby steps. First with Singularity, then with the still-secret Midori, which no one outside Microsoft Research really knows anything about.
On Windows 8, the WinRT subsystem is based on COM, which makes use of reference counting, with C++/CX as the native language for developing such components.
As mentioned, at BUILD there was a comment about Windows code being migrated from C to C++, but no details were given as to whether it is pure C++ or C++/CX.
In theory, Apple could throw away all the C and C++ code from Mac OS X and just use Objective-C everywhere. That way developers could use the C subset in areas where performance cannot be ignored at all, while using Objective-C ARC in code sections where it makes sense to do so.
Most companies prefer to evolve software, rather than rewrite it.
So do you see Microsoft or Apple creating from scratch an operating system in a GC system programming language just because it is cool to do so?
Of course not, because in the end that does not sell computers, regardless of how fast the system might be. So progress is very slow, will take many versions until we get there, and might even take a few more decades, who knows.
But we are moving in that direction, albeit in baby steps. Objective-C got a pure GC, which had a few issues, and now ARC, which seems to be better.
With Windows 8, C++ got reference-counting extensions with C++/CX. Whether they should have used the standard *_ptr<> types instead is another discussion.
The remaining question is why doesn’t another vendor do it?
Well, developing operating systems is a lot of effort, and now with free (gratis) operating systems available (the Linux and BSD families), no single company is going to invest money in going up against any of the big guys that sell operating systems with an established base.
So we keep using what works and, most importantly, sells computers; hence inertia.
But that doesn’t preclude the existence of verifiable benchmarks showing the performance compared to these established operating systems.
If someone were to present such benchmarks for their research operating system then the industry would take great note. I have yet to come across any such benchmarks, can you point me to any?
Obviously these research OSes are being continuously benchmarked by their creators against established operating systems, and if they had something impressive to report performance-wise in those comparisons, I’m certain they would be shouting it from the rooftops – research grants and all that.
Singularity saw no commercial potential and was passed off to academia, and ‘then with the still secret Midori that no one outside Microsoft Research really knows what it is about’ means absolutely nothing in the context of this discussion, it could be anything.
Microsoft’s migration from C to C++ has been going on for quite some time as far as I know, hardly surprising as C is in a state of limbo in their toolchains. That doesn’t mean they are shifting to being garbage collected.
There’s a difference between using what works and where you actually put your continued efforts. I’ve seen no indication that efforts are being routed away from the traditional kernel/operating system towards some new shiny safe memory based solution.
Simply because when push comes to shove, they will be slower than the current solutions (unless they can somehow perform magic), and the current solutions are working very well.
We’ll just have to agree to disagree; the future will tell either way. But seriously, I’ve been hearing (the same?) people making your kind of claims for the past 10 years or so; those baby steps must be really small indeed.
Here are some benchmarks of the current state of affairs on Singularity, in case you don’t know it:
http://esl.epitech.eu/~arnaud/@lsd/l/016%20An%20Overview~*~…
Performance discussion starts on Page 30.
For A2 (Blue Bottle) I cannot find a paper now.
Sure. I am betting that this will eventually happen. Let’s see what the future brings.
C might have a limited domain now, but you are forgetting that when it was written it was very much a general-purpose language. In the same way Go is.
Go might not have these qualities compared to C, but it does have many of them compared to modern application programming languages. Outside of writing an OS, Go’s problem domain is a superset of C’s as used in modern times.
I think you don’t understand what makes Go fast to compile: http://golang.org/doc/go_faq.html#What_is_the_purpose_of_the_projec…
Furthermore, Go tip (1.1 in development) has many, many more optimizations than Go 1.0.3, and the compiler is 30% faster still…
You are missing something.
1. Your link does not include Go in the comparison.
2. If you do include it, you will see that only very fast languages like C/C++/Java beat Go (and Go is improving); Go beats most languages on the list (including Mono C#).
Most importantly:
3. Your link only considers execution time; my link is a composite of execution time, memory usage and code size (weighted equally).
This is important because although Go is a little slower than Java (which is actually one of the fastest languages), Java uses MUCH more memory (hence its reputation). You can play with the weights in my first link to reflect your own priorities regarding execution time vs. memory usage vs. code size, and make your own call.
Again, I mean modern C as in C the general-purpose language of the 1970s/80s, not C the systems programming language of today. I’m not suggesting Go replaces what C is primarily used for nowadays (in some cases). The primary use cases of C have changed; I mean “modern C” in regard to C’s original, more general-purpose nature.
…I was responding to someone who was comparing Go and D… I don’t think D is particularly relevant to most developers.
Sounds great, but it most likely means that the original compiler had lots of untapped speed improvements prior to this upcoming version. Implementing optimizations to be applied during code generation will slow compilation down.
Oh, and both the Go compiler and the Go runtime are written in, you guessed it, C
Yes it does, it’s two steps behind Java 7.
That is how you measure language performance. Memory usage and code size are other metrics.
I actually don’t think that matters much when it comes to the areas where Go and Java are likely to be deployed (which are unlikely to be memory-constrained), though obviously using less memory is not a bad trait.
Still, I think Go has every chance of eventually beating Java in raw performance. Currently Java has what is probably the best-in-class garbage collector; Go’s garbage collector (as of 1.0.3 at least) is likely far behind.
Also, in overall compiler optimizations the Go compiler sometimes loses out heavily to gccgo on the exact same code, indicating that there is still a lot of room for improvement.
Well, if you had framed it as such then I would have had no problem with your claim, although I would still find it odd to compare Go with C’s much more widespread usage in the 70s/80s as opposed to the areas it mainly occupies today.
Ah, my bad, sorry.
Even though it didn’t appear so initially I think we agree more than we disagree. Perhaps with different emphasis, but:
I disagree with you here. Performance is multidimensional, and those three factors are the primary ones.
Look at something like car performance: it is a combination of attributes like maximum speed, acceleration, braking, handling etc. Again, multidimensional.
Also, you are correct, the Go tip/1.1 garbage collector is much better.
“We did not want to be writing in C++ forever” -Rob Pike
This goes back to the Bell Labs guys feeling that C++ took C in the wrong direction. Go is Ken, Rob and Robert’s attempt at a C-like language that they feel improves on C as a general-purpose language, going in a different direction than the one C++ took (a path mostly followed by Java/C#).
http://commandcenter.blogspot.co.il/2012/06/less-is-exponentially-m…
I think so too, like I said, I like Go
Sure, but if you omit the words memory usage or code size and simply say “language performance”, it will most likely be the generated code’s performance that is being referred to, as that is the most common metric in benchmarks.
Great, I haven’t built tip since before 1.0.3, so I’m in for a nice surprise by the sound of it. Go 1.1 still slated for Q1, I hope?
Well, like I said I think Go has a good chance at taking on C++/Java/C# in the application space, both on the end user desktop and enterprise.
From my as-yet meager time with the language, I think the built-in concurrency primitives (goroutines and channels) are likely its best features when it comes to “selling” the language.
Again, given how “more cores!” seems to be the CPU manufacturers’ battle cry these days, a language like Go, which makes it easier to use an increasing number of cores without exposing the programmer to increasing complexity, has a very bright future in my opinion.
Nope, I was hoping it would be that soon as well.
Here is a burn down graph, my understanding is when the blue line hits bottom 1.1 will be ready for release: http://swtch.com/~rsc/go11.html#
In the meantime you can always grab a snapshot of Go tip. A couple of the Go core devs I talked to feel that for performance-intensive uses this is the way to go. You can use the Google issue tracker to make sure a particular snapshot has no issues you care about (many are feature enhancements, etc.).
I’ll probably give it a shot again; last time I tried it (pre-1.0.3) there were some problems with some cgo bindings I used.
Still, it’s not as if I can’t wait for the 1.1 release; as I said, I’m not doing any production code in Go, so performance isn’t really an issue. Just trying to grok the language in my spare time – so far, so good.
And yet C is indeed stuck in the past, with C99 being the latest version I know of.
I guess the culprit is C++, which is supposed to be, well, C plus other things. So probably a lot of people will answer just that: the newest versions of C are “embedded” into C++’s evolution.
I feel pity for C. That basically means there is no more evolution to expect for C, all due to a “name-grabbing effort” by the C++ team. For people who want the predictability of C, there has to be a route other than C++.
C is low-level, idiosyncratic, difficult to maintain, and error-prone… and I love it! Always have, always will. But I’d be the first to admit that for most IT applications it’s a poor choice.
For IT admin tasks I often use Perl or shell programming. I admire Python lots though I don’t much use it myself. Never really learned Java and its 18 million classes. Oh well…
You don’t have to learn the classes; that is why there is a language reference 😉
I repeat, I really do not want to miss OO when going multi-threaded.
Greetings,
pica
Why is a Trabant better than a Ferrari?
Simple!! It doesn’t allow you to drive into a tree at 300 km/h. With the “top speed” of a Trabant being around 70–100 km/h, you still have a chance to survive if you have an accident. And it costs a lot less to repair.
Guys, really: with C there is a lot of stuff you simply can’t do, which forces you to write a lot more code instead of relying on automatic code generation.
And what would you say about the D programming language?
But after some three and a half decades of programming – much of it with C and C-syntax languages – C (and most every language based on it) PISSES ME OFF. Needlessly cryptic, pointlessly convoluted, and seemingly designed to make 100% certain you are going to make coding mistakes; I would rather hand-assemble 8K of Z80 machine language than deal with trying to find a bug in 100 lines of C code.
The ONLY reason I put up with it is that generally speaking it’s what you are forced into using by compiler availability, support, and what’s expected of you in the workplace. It’s easy to blame the lemmings at the rear and front — since most of us writing software are stuck in the middle and can’t see where you’re going and can’t stop for fear of getting trampled.
I often think C and every language based on its syntax from C++ to Java to PHP, exists for the sole purpose of making programming hard. They are certainly a far cry from the elegance of languages like Pascal or the simplicity of assembly… To be frank, I thought there were two core reasons for higher level languages — portability – which is a joke when you’re still that close to the hardware, and being simpler than machine language – which it most certainly is NOT! It gets far worse when you look at objects in most any C derivative language since they seem to be just shoehorned in any old way!
Even sadder are all these ‘newer’ languages that are even more needlessly cryptic and difficult to decipher, like Python, Ruby, or lord help you, Rust… Rust, the language for people who think C is a bit too clean and verbose – which is akin to saying the Puritans who went to Boston in the 17th century did so because the CoE was a little too warm and fuzzy for their tastes… or founding your own extremist terrorist group because Hezbollah was a bit too warm and fuzzy. There’s this noodle-doodle idiotic concept right now that typing a bunch of symbols and abbreviations most people could never remember in a hundred years is ‘simpler’ than using whole words – and the quality of code has gone down the toilet thanks to it… Such idiocy explains why people will piss away bandwidth and code clarity on halfwit rubbish frameworks like jQuery.
It’s enough to make you think the old joke… isn’t a joke.
http://www-users.cs.york.ac.uk/~susan/joke/c.htm
I don’t know about the other two, but Python is really easy to decipher. The mandatory indents make it really easy.
I should like Python — I really should given what a stickler I am for clear consistent formatting…
But to be brutally frank, it’s more cryptic than C in its logic structures. Take the example up above by funkyelf – both the C and the Python versions make me want to punch someone in the face due to their lack of clarity.
But again, I worship at the throne of all things Wirth so…
I LIKE the forced formatting of Python – I DISLIKE the unclear control structures and lack of verbose ending elements… and the needlessly short/cryptic methodology and naming conventions. By the time you get into iterators and generators, it’s a needlessly convoluted mess that, honestly, I have a hard time making any sense of.
I dunno, maybe this dog is getting too old for new tricks – but I cry for anyone trying to learn with Python, which is part of why I don’t get why the Pi folks and many educators have such a raging chodo for it. It’s the LAST thing I’d consider using to teach people to program… It’s another of those languages so complex, IMHO, you’d be better off just sucking it up and coding machine language directly. I really don’t get these high-level languages that make assembly look simple.
With iterators and generators, you have to understand that it’s almost a different “paradigm”. I think the root of your problem with Python may be more the fact that it’s not a purely imperative language? I’m currently biting the Common Lisp bullet, and the Lisps have the same kind of ability.
As with C++, most programs don’t need advanced Python features like iterators or generators anyway, but once you get used to a more declarative style of programming, it becomes a lot easier. That usually just means writing a few Python list comprehensions.
The thing that makes Python a good teaching language is that the basics of programming in Python are a lot easier to understand than C and its descendants. Yes, there are complicated advanced things, but in terms of the basic stuff, Python is easier to teach.
I too think the productivity aspect of C is overrated by the author.
I think what the author fails to consider is when your programming language of choice is “fast enough”. Also, too many times I have seen problems solved with brute-force approaches instead of smarter algorithms. I don’t care if you write machine code: if you’re using a dumb algorithm, your program’s runtime efficiency will pay for it. I think that C’s reputation as a “high-level assembly” actually makes people less likely to truly think about algorithm design, and instead rely on tricks of the compiler. In other words, C’s reputation as a fast language is actually detrimental to good algorithm design.
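To illustrate with a contrived example of my own, the same membership test written twice in C:

#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Brute force: O(n) for every single query. */
int contains_scan(const int *v, size_t n, int x)
{
    for (size_t i = 0; i < n; i++)
        if (v[i] == x)
            return 1;
    return 0;
}

/* Smarter: sort once up front (O(n log n))... */
void prepare(int *v, size_t n)
{
    qsort(v, n, sizeof *v, cmp_int);
}

/* ...then every query is O(log n) via binary search. */
int contains_sorted(const int *v, size_t n, int x)
{
    return bsearch(&x, v, n, sizeof *v, cmp_int) != NULL;
}

Both are “fast C”; across a million queries on a large array, only the second one is actually fast.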
Many people have been addressing C’s lack of OO as a good thing, but I will say that C’s lack of functional-style programming is a bad thing. Having had to torture my brain to grok functional programming languages, I think people just don’t think easily in them. Recursion tends to hurt people’s brains, and generally people come up with iterative solutions instead. Also, the immutability of variables throws people for a loop (no pun intended from above). Unfortunately, C is horrible from a functional perspective. Start writing functions in a map-reduce style, using first-class functions for filtering, having the ability to generate functions dynamically, or using closures to encapsulate data… and you will truly miss it when you have to write things in an imperative style.
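For the curious, this sketch (again mine, illustrative) is about as close as C gets to a first-class map; note the hand-threaded context pointer standing in for a closure’s captured environment:

#include <stdio.h>

/* Apply f to every element; ctx is the poor man's closure. */
static void map_int(int *v, size_t n,
                    int (*f)(int, void *), void *ctx)
{
    for (size_t i = 0; i < n; i++)
        v[i] = f(v[i], ctx);
}

/* A "function" that would be a one-line lambda elsewhere. */
static int scale(int x, void *ctx)
{
    return x * *(const int *)ctx;
}

int main(void)
{
    int v[] = { 1, 2, 3, 4 };
    int factor = 3;
    map_int(v, 4, scale, &factor);
    for (int i = 0; i < 4; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}

Workable, but every capture, every element type and every composition is manual; that is the gap being described.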
But I miss using Fortran even more.