The C programming language defines two standard memory management functions: malloc() and free(). C programmers frequently use those functions to allocate buffers at run time to pass data between functions. In many situations, however, you cannot predetermine the actual sizes required for the buffers, which may cause several fundamental problems for constructing complex C programs. This article advocates a self-managing, abstract data buffer. It outlines a pseudo-C implementation of the abstract buffer and details the advantages of adopting this mechanism.
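As a rough illustration of the idea (the article's own pseudo-C is not reproduced here), an abstract buffer along these lines might be a linked list of fixed-size blocks that grows on demand. All names below (abuf, abuf_append, BLOCK_SIZE) are illustrative, not taken from the article:

```c
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 256

struct block {
    struct block *next;
    size_t used;                 /* bytes filled in this block */
    char data[BLOCK_SIZE];
};

struct abuf {
    struct block *head, *tail;
    size_t total;                /* total bytes stored */
};

static struct abuf *abuf_new(void)
{
    return calloc(1, sizeof(struct abuf));
}

/* Append bytes, allocating new blocks as needed; existing data
   is never copied or moved. Returns 0 on success, -1 on OOM. */
static int abuf_append(struct abuf *b, const void *src, size_t n)
{
    const char *p = src;
    while (n > 0) {
        if (!b->tail || b->tail->used == BLOCK_SIZE) {
            struct block *blk = calloc(1, sizeof(*blk));
            if (!blk)
                return -1;
            if (b->tail)
                b->tail->next = blk;
            else
                b->head = blk;
            b->tail = blk;
        }
        size_t room = BLOCK_SIZE - b->tail->used;
        size_t chunk = n < room ? n : room;
        memcpy(b->tail->data + b->tail->used, p, chunk);
        b->tail->used += chunk;
        b->total += chunk;
        p += chunk;
        n -= chunk;
    }
    return 0;
}

static void abuf_free(struct abuf *b)
{
    struct block *blk = b ? b->head : NULL;
    while (blk) {
        struct block *next = blk->next;
        free(blk);
        blk = next;
    }
    free(b);
}
```

Because growth only appends new blocks, data already written never moves, which is the property a scheme like this relies on.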
that many times you want a pointer but do not know of what type?
well, that would be why we have templates in C++ 😉
never mind… I read a bit more, and this guy wants a bit more than even templates can offer. He wants something like a runtime-dynamic generic function where, at any moment, the interface can change a little bit.
that is more dynamic than a template since templates require compile time binding of the types.
Why do people keep coding in C? Is it about speed?
We all know that processing power is cheap today while man-hours are expensive. Debugging is ultra-expensive in terms of time and money.
It’s been 2 years since I switched to C#, and I can’t imagine going back to C/C++ (which I used to love before that). My programs are cleaner, more readable, and type safe; I don’t have to worry about managing memory, calling conventions, linking problems, etc.
Why do people keep coding in C?
One big reason is interoperability with existing applications. Why doesn’t Mozilla or wget have BitTorrent support yet? Perhaps because adding support would be a messy prospect at best, given that BitTorrent’s only fully functional implementation is in Python.
Is it about speed?
And overall resource consumption… one application running within a runtime is tolerable, but what happens when you have dozens of them running concurrently? Suddenly your computer feels about a half decade older.
We all know that processing power is cheap today while man-hours are expensive. Debugging is ultra-expensive in terms of time and money.
Yes, however while you’ll have only one development team, you will most likely have multiple users, and each of them will need a system capable of running the application you’ve developed.
There’s also the issue of portability… with C/C++ and Qt I can write an application portable between Windows, Mac OS X, and various *IX platforms with X11. With C# you are basically bound to a single platform for any GUI application you create (until the Windows.Forms implementation in Mono is production-ready, anyway). Java zealots may tout its portability, but try configuring a FreeNet node on, say, OpenBSD (or even Solaris). The start/stop scripts make terrible Linux-centric assumptions, like that /bin/sh is bash.
It’s been 2 years since I switched to C#, and I can’t imagine going back to C/C++ (which I used to love before that). My programs are cleaner, more readable, type safe
If you couldn’t manage clean, readable code with type safety in C++ (hello, templates?) there’s something seriously wrong with the way you’re programming.
I don’t have to worry about managing memory
Garbage collection is a leaky abstraction, and it’s certainly still possible to leak memory within a garbage-collected application. Programming correctly in a garbage-collected environment, especially with large applications, requires knowledge of the operating paradigms of the particular garbage collector implementation in the runtime environment you are using. The difference is that excellent memory-debugging tools are now freely available for native-code applications, most notably valgrind; if you are experiencing memory leaks in a garbage-collected environment, finding them is considerably more difficult.
Why do people insist on having these weird ways of managing buffers? Instead of using a linked list of grown buffers, he could have used something like

    struct {
        char *buf;
        int size;
    };

to indicate the size plus contents. A simple buf = realloc(buf…) would do everything he attempts to achieve, wouldn’t it?
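A minimal sketch of the realloc approach this comment describes, with the usual capacity-doubling refinement to amortize the cost of repeated growth (all names here are illustrative):

```c
#include <stdlib.h>
#include <string.h>

struct gbuf {
    char *buf;
    size_t size;     /* bytes in use */
    size_t cap;      /* bytes allocated */
};

/* Append n bytes, growing the buffer by doubling when needed.
   Returns 0 on success, -1 on allocation failure. */
static int gbuf_append(struct gbuf *g, const void *src, size_t n)
{
    if (g->size + n > g->cap) {
        size_t cap = g->cap ? g->cap * 2 : 64;
        while (cap < g->size + n)
            cap *= 2;
        char *p = realloc(g->buf, cap);
        if (!p)
            return -1;
        g->buf = p;
        g->cap = cap;
    }
    memcpy(g->buf + g->size, src, n);
    g->size += n;
    return 0;
}
```

The trade-off, raised further down the thread, is that realloc may have to copy the whole buffer when it cannot grow the allocation in place.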
C# isn’t available on all platforms. C is available on most platforms. Some platforms still only have assembly available to them.
C/C++ is the only way to go for larger scale embedded systems. Additionally, garbage collected languages can have problems if a garbage collection operation pauses a real time program.
Having written a program in Python and ported it to C++, I understand why these languages are hard to use. My code grew from 500 lines to 2500 lines and became less neat (due to the lack of garbage collection).
The other point to make about C is that yes, it is about speed. I know that for large-scale research operations, physics simulations and such, it is simply not worth even the (minor) performance hit to go to a language like C# or Java. Additionally, low-level OS code should be as fast and as compact as possible, which is why even C++ is not often used for code like that (it’s mostly C).
I hope that made enough sense.
There is still a place for C. It’s just not for applications written for the desktop that involve much user interaction.
> If you couldn’t manage clean, readable code with type safety
> in C++ (hello, templates?) there’s something seriously wrong
> with the way you’re programming.
OK. Then tell me what percentage of C++ programmers are able to READ and MAINTAIN template-heavy code like Boost, ATL/WTL, STL, and the like.
I actually liked STL very much and found it very easy to use, yet most people I know couldn’t read even the simplest example in C++. At the same time, they had no problem reading code in higher-level languages.
BTW, I lied. Last year I wrote a small frontend for a CRM application with C++/ATL/WTL for PocketPC. But on the first day after version 1.0 was done, we made the decision not to touch C++ anymore, and we moved to the .NET Compact Framework.
The project codebase shrank from 10,000 lines down to 2,000, and we managed to re-implement the original 1.0 features and more. It took us 4 weeks to re-implement in C#, and we haven’t had a single problem since. The original project took 3 months, and more than 70% of that was spent on debugging.
Having written a program in Python and ported it to C++, I understand why these languages are hard to use. My code grew from 500 lines to 2500 lines and became less neat (due to the lack of garbage collection).
Huh? I can understand that the program grew a bit, but C++ – like Python – has libraries available that handle almost everything under the sun: networking, graphics, etc. I am also not sure what you mean about your code looking less neat when ported. C++, like Python, can be clean and small with a bit of thought. Were you using the standard C++ libraries?
Us old guys like C. I can think of a few reasons.
1/ No other option (embedded systems)
2/ If you have to code something new in a particular time frame, it would probably be best to go with a language you already know instead of stumbling on a newer language that you don’t know. If that language is C then so be it.
3/ You have to interface with libraries that don’t have bindings to another language you’ve chosen.
4/ You have to code for another platform that the other languages have not been ported to properly. C++ can possibly fail here still. Rare though. “.NET” options are still pretty much Windows only at the moment.
5/ Legacy stuff is in C and the effort and money to convert it is probably not worth it. So maintenance or extensions will end up staying in C.
I still code in C. For the most part it’s “go with the beast you know.” I’m comfortable in it, and for speed of coding I can’t be faster in anything else at the moment. I’ve played with other languages such as Java and C++, but there are parts of those languages that I simply do not like (many more keywords, syntax, behaviour, etc.). ‘C’ is simple and you can do anything in it, even code in an object-oriented manner, so moving over to C++/Java/C# or whatever isn’t a big draw for me; I don’t really need it. What you need to know with C is much smaller than with all the newer languages.
C will still stick around for years to come in my opinion. Or at least I hope so, I still prefer it over anything else.
As for programs being cleaner, more readable, etc.: that’s entirely up to the programmer. I know some in my company who write poorly readable code while others write pristine code, and they are both coding in C. Memory management I find isn’t a big issue; if you keep it simple, then you don’t have problems. We’ve never had big problems with linking or calling conventions or type safety.
Robbert
Realloc may require a copy of the buffer as it takes up more space. Their scheme will just allocate a new chunk without this happening.
However, I would agree somewhat with the comment about weird data organization. Personally I would have just created a simple data-type API like a dynamic array of void *, similar to the STL vector class — pretty generic and all that they really need — and used that instead. It’s clearer, cleaner, and randomly accessible. I still have the realloc problem as above, but the array of void *’s is much smaller, so the copying may be tolerable; and there are ways to improve performance in this area, which STL vector does as well.
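A sketch of the void * dynamic array suggested here, assuming the usual doubling strategy (names are made up for illustration). A realloc then only copies the small pointer table, never the data the pointers refer to:

```c
#include <stdlib.h>

struct pvec {
    void **items;
    size_t len, cap;
};

/* Push a pointer, doubling the table when full.
   Returns 0 on success, -1 on allocation failure. */
static int pvec_push(struct pvec *v, void *item)
{
    if (v->len == v->cap) {
        size_t cap = v->cap ? v->cap * 2 : 8;
        void **p = realloc(v->items, cap * sizeof *p);
        if (!p)
            return -1;
        v->items = p;
        v->cap = cap;
    }
    v->items[v->len++] = item;
    return 0;
}
```

Random access is then just v.items[i], which a linked list of blocks cannot offer without walking the list.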
Robbert
This scheme, and probably the many others that have tried to address similar memory-allocation problems, needs standardization; otherwise they remain framework x or framework y. They need to be part of most of the C libraries, if not all, so that programs are portable.
Yup. People could do what we’re doing at work: make a C API layer on top of the STL classes to get the data types you need. Works well. Not as flexible as using STL from C++, but to fast-track getting standard data types, that would be the way to go. Pretty simple to do as well. Debugging is a little interesting, though, as you can’t see what’s inside the C++ data pointers from C, but anyway, let’s just call that ‘information hiding’, which university professors like to think is a good thing.
Robbert
While this is no magic solution, it’s most certainly not a bad idea. It DOES differ from any standard implementation.
A cheesy example I can think of is an implementation of sprintf, where the size of the resultant buffer is unknown until the actual processing is done. This dynamic buffer would be a reasonable choice, as no extra copying is needed and it takes care of buffer overruns.
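For comparison, C99’s snprintf already solves the sprintf-sizing problem without a dynamic buffer: called with a NULL buffer and size 0, it only reports the length that would have been written. A sketch for a single int argument (fmt_int is a made-up name; a general version would need va_copy):

```c
#include <stdio.h>
#include <stdlib.h>

/* Format one int into a freshly malloc'd, exactly-sized string.
   The format string must contain a single %d-style conversion.
   Returns NULL on failure; caller frees the result. */
static char *fmt_int(const char *fmt, int value)
{
    int n = snprintf(NULL, 0, fmt, value);  /* measure only */
    if (n < 0)
        return NULL;
    char *buf = malloc((size_t)n + 1);
    if (buf)
        snprintf(buf, (size_t)n + 1, fmt, value);
    return buf;
}
```

This needs one extra formatting pass instead of extra copying, so it is a different trade-off from the block-list buffer rather than a replacement for it.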
Naturally there are cost concerns about parsing the resultant data structure; it would be much more complex and costly in time. Another feature of this implementation is that if you use a constant block size, you can get all these blocks from a pool and thus have less memory fragmentation and fewer system calls.
Yamin
Speed, Control, Flexibility and Versatility.
C code is small, compact, and efficient with system resources. Ever tried running 4 Java apps on your system? Don’t make me cringe. C code is fast, period.
C has a clean, eloquent, and elegant syntax. It has an impressive list of useful libraries. And it affords the programmer flexibility, control, and power in how he/she wishes to accomplish a task.
C is versatile. I can write drivers, graphic libraries, web browsers, GUI apps, embedded systems, operating systems you name it in C. You can’t say the same for other languages.
Finally, C compilers are among the most mature, most stable, most researched, and most optimized compilers compared to those of most languages, especially the new ones. You’d be hard pressed to find a system that you can’t write C code for. Its versatility impresses.
We all have our reasons for using whichever language we use. What pisses me off is when people think their language of choice is the best there is because of a flimsy, needless feature that makes them lazier.
You become a master programmer when you understand the strengths and weaknesses of your language of choice, thereby exploiting its benefits while staying constantly cautious of its drawbacks. You don’t become a master programmer by looking for the holy grail of programming languages, because there is none yet.
Use whatever language you wish; just don’t claim it to be the best, or try to shove it down our throats. That’s rather childish, immature, and baseless. After all, writing good code has little to do with language and everything to do with design.
“Why do people keep coding in C? Is it about speed?”
Because PCs are a very small percentage of application runners. Most of the world’s software runs inside restricted embedded systems. I work as a Game Boy developer. Let me tell you that C# is definitely not what we want for development. Not even C++/OOP, as flat C is the only acceptable trade-off between full hardware control and a high-level, sanity-keeping language.
I am glad that a lot of other people think the same about C (that it is one of the best languages around).
When I look at the .NET/Java hype, I am really afraid that C (or even C++) will become an embedded-systems/kernel programming language only. Moreover, at my university (in Vienna), most courses are in Java, and I fear that when these students finish their education and begin to work, they will try to push Java as a top language, since they are used to it.
Why do *I* like C especially? As noted before by somebody else: it’s so versatile. I can write _anything_ with it, and even if I want to crash my system, I can do it — no “I won’t give you this programming-language feature, because it might be too dangerous for you” paradigm, as in some “modern” languages.
I do mostly Java programming at work, as this is what the company wants me to work with. I’ve also worked with Perl, PHP, and a fair amount of C++. C is the fourth language I learned, in my first year of college (after childhood experience with Sinclair BASIC, Z80 assembler, and Pascal), and I’ll tell you this: I feel the most comfortable with C and assembler. Yes, I can do many things with Java and C++, but C is what I “feel”.
The funny thing is that I see a C comeback, as there is a growing trend towards embedded systems, where C and assembler rule the day, any day.
Any “managed” language or code is going to limit your freedom as a programmer and as an innovator if you want.
I see only one way Java (or its successor languages) will be able to prevail: make hardware for it and fix the leaky abstractions.
I cut my teeth on C. It was the first language I ever learned, and I learned it well. I have written hundreds of small programs, dozens of medium programs and several large programs in C. I have designed and implemented an operating system in C. I have worked on two other operating system kernels, one (large) hobby OS, and one a multi-million line state-of-the-art commercial UNIX. I’ve implemented a TCP/IP networking stack and software router in C. I’ve worked on a production-quality Java virtual machine topping 500,000 lines of C and C++ with supporting libraries over 500,000 lines of code. I have written assemblers and multiple interpreters in C. I have dug through the implementation of libc on a modern, commercial UNIX and re-implemented many pieces of it.
I can tell you things about the language that you think you know, but you don’t. I can point to language (mis)features that are probably abused in most C programs that exist, and in many library implementations.
A feeling has been growing on me over the years as I branched out and started to program big systems in other languages. I’ve written hundreds of thousands of lines of C code in my day, and I have written hundreds of thousands of lines of Java code, thousands of lines of C++, and thousands of lines of ML. And yet, with every line of code that I write that is not C, I thank GOD ABOVE that I don’t have to deal with:
– Macro hell.
– Header hell.
– Pointer arithmetic.
– Memory leaks.
– Undefined semantics.
– Compiler specific extensions.
– Compiler bugs.
– Compiler inefficiencies.
– Incompatible headers.
– Code laced with #ifdefs.
– Sizes of primitives differing on architectures.
– Struct alignment.
– Union hell.
– The comma operator.
– Huge switch statements.
– Cases missing breaks.
– Cases that fall through.
– Manual memory allocation.
– Unsafe, unchecked casts.
– ./configure
– Code that confuses “smart” editing tools.
– Linking.
– Library hunts.
– Insufficient man pages.
– Weak types.
– Lack of powerful code analysis tools.
– Function pointers.
And the list just goes on.
I could rant for hours, but I guess I should stop. I’ll just be shrugged off and dismissed anyway…
First off, this is absolutely not a flame. I program in Fortran (an app for a chemistry professor) and I have the same problem: how do you allocate memory dynamically for an unknown number of objects? Now, the article (by my quick scan) basically implements a linked-list structure containing blocks of memory. Fine; I originally thought of (and may later come back to) this method in Fortran (easy enough to do in modern Fortran). However, Fortran has another facility which is a bit easier to use for this problem: direct-access files.
A direct-access file is sometimes known as a binary file. To write a direct-access file you have to specify a record length when you open the file and a record number when you write to the file. The record can be whatever you wish (hint: an arbitrary data type) as long as it is no larger than the record length. This can be checked by defining your data in a derived type (like a C struct or the data members of a C++ class) and using the intrinsic inquire() to find the size of the data, thus ensuring you only ask for a record length the correct size for the data to store.
Now, we have this direct-access file open. To write to it, I store my data in a derived type and write this to a record. All of this is in a loop, so that the data is stored as it is produced and I keep count of how many records I write.
To read back the data, I allocate an array of my derived type with as many elements as I counted being written and then I read the data from the direct access file into the array, with each record being one element of the array.
Wham-bam, easy as pie. Of course, the direct-access file is an I/O operation, which is potentially slow, but I’m betting on the fact that conventional platforms allocate enough memory to I/O buffers that small reads/writes are essentially memory copies, not disk hits. There is a chance of hitting disk, but I think it’s slim in most cases.
Anywho, just a method that may be usable in other languages, but it is definitely a nice feature in Fortran.
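For what it’s worth, the same trick can be approximated in C with a temporary file of fixed-size records: write an unknown number of them as they are produced, then read them all back into one allocated array. A sketch (struct rec and roundtrip are made-up names):

```c
#include <stdio.h>
#include <stdlib.h>

struct rec {
    double x, y;
};

/* Write n fixed-size records to a temp file as they are
   "produced", then load them all back into a malloc'd array.
   Returns NULL on failure; caller frees the result. */
static struct rec *roundtrip(const struct rec *src, size_t n)
{
    FILE *f = tmpfile();
    if (!f)
        return NULL;
    for (size_t i = 0; i < n; i++) {        /* produce as you go */
        if (fwrite(&src[i], sizeof src[i], 1, f) != 1) {
            fclose(f);
            return NULL;
        }
    }
    struct rec *all = malloc(n * sizeof *all);
    if (all) {
        rewind(f);
        if (fread(all, sizeof *all, n, f) != n) {
            free(all);
            all = NULL;
        }
    }
    fclose(f);
    return all;
}
```

As with the Fortran version, small records will usually stay in the stdio buffer rather than hit the disk.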
so is perl…what is your point?
so is perl…what is your point?
When you are done writing your graphics driver in perl, wake me up.
And yet, with every line of code that I write that is not C, I thank GOD ABOVE that I don’t have to deal with:
(list of C-isms).
This will make me sound like an NRA nut, but here goes: Languages don’t make stupid code, people make stupid code.
“Why do people keep coding in C? Is it about speed?”
Well, one of the reasons is that most operating systems are written in C. Think Linux, the BSDs, and I believe OS X and Windows as well.
Having written a program in Python and ported it to C++, I understand why these languages are hard to use. My code grew from 500 lines to 2500 lines and became less neat (due to the lack of garbage collection).
Huh? I can understand that the program grew a bit, but C++ – like Python – has libraries available that handle almost everything under the sun: networking, graphics, etc. I am also not sure what you mean about your code looking less neat when ported. C++, like Python, can be clean and small with a bit of thought. Were you using the standard C++ libraries?
I should have specified that it was a hand-coded parser. This parser, when originally implemented in Python, used a static virtual function to create an instance of itself; C++ does not allow such a construction. Additionally, simplification on the fly was done by passing a result through from a recursive call without ever instantiating the class. Because C++ cannot have static virtual member functions, a virtual function within the class cannot be called without instantiation; so if the class is instantiated and the only reference to it is wiped out by returning a recursively created reference, a memory leak occurs. Garbage collection solves this problem.
Mostly, though, the problem lies in porting something from a garbage-collected language to a non-garbage-collected language while maintaining style between them.
I hope that was clear. My C++ code was readable, though in many places it had workarounds for dealing with C++.
I did not screw up in doing this. It was fine C++. It was simply much more complicated than the Python equivalent, and in the end, it did not leak memory.
No standard library was used in doing this.
Well, all your text sounds to me like someone trying to convince me to drop my car and take the city bus, so I can thank the Gods Above. You’ve got good points:
– Don’t have to stay awake
– Don’t need to know the road panel meanings
– Can read a book
– Don’t need to have good reflexes
– Don’t need to fix your car once in a while
– etc
All valid points. But is that enough for me to completely drop my car and travel by bus only?
Hell no.