Linked by Thom Holwerda on Tue 12th Oct 2010 21:52 UTC
Java "Oracle and IBM today announced that the companies will collaborate to allow developers and customers to build and innovate based on existing Java investments and the OpenJDK reference implementation. Specifically, the companies will collaborate in the OpenJDK community to develop the leading open source Java environment."
Thread beginning with comment 445147
Neolander Member since: 2010-03-08

> ... Then again, I must wonder, what's the use?

It makes code a lot easier to read and work on, and it enforces separation of the various components.

> If you cannot use:
> - operators (kernel code that cannot be debugged by reading it is a big no-no in my view).

I use operator overloading heavily myself, since I implemented a cout-ish class for debug output (much easier to implement than printf, and more comfortable when it comes to using it IMO).

You can still debug with the usual techniques, like moving a breakpoint forward and seeing where things go wrong. If the code crashes after the instruction using the operator, you know who's faulty...
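For what it's worth, a minimal sketch of what such a cout-ish class could look like (all names hypothetical; output goes to an in-memory string here so the sketch runs in user space, where a real kernel would write to video memory or a serial port instead):

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch of a cout-ish kernel debug stream built on
// operator overloading. Output accumulates in a string buffer.
class DebugStream {
public:
    DebugStream& operator<<(const char* s) {
        buffer_ += s;
        return *this;
    }
    DebugStream& operator<<(uint64_t value) {
        // Print integers in hex, a common choice for kernel logs.
        static const char digits[] = "0123456789abcdef";
        char tmp[17];
        int i = 16;
        tmp[16] = '\0';
        do {
            tmp[--i] = digits[value & 0xf];
            value >>= 4;
        } while (value);
        buffer_ += "0x";
        buffer_ += &tmp[i];
        return *this;
    }
    const std::string& text() const { return buffer_; }
private:
    std::string buffer_;
};
```

Usage then reads like iostreams: `dbg << "fault at " << address;` — short statements, no format strings to keep in sync with the argument list.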

> - virtual functions (as above; plus, they may hide a major performance hit due to unknown vtable depth/size).
> - You are forced to write your own per-class init and close functions (lack of constructor and destructor error codes).

In a kernel, if the initialization of a basic component fails, I think it's time to panic() anyway. But if for some reason you really need an initialization/closing function to return an error code, you can just go the C way and use an "errno" global variable.
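A sketch of that C-style pattern, with hypothetical names (the error values and the timer_init function are invented for illustration):

```cpp
// Sketch of the C-style "errno" pattern for init functions that
// cannot return rich error codes through a constructor.
enum init_error { INIT_OK = 0, INIT_NO_MEMORY, INIT_BAD_HARDWARE };

static int init_errno = INIT_OK;  // global error slot, like errno

// A component init function: returns false on failure and records
// the reason in init_errno for the caller to inspect.
bool timer_init(bool hardware_present) {
    if (!hardware_present) {
        init_errno = INIT_BAD_HARDWARE;
        return false;
    }
    init_errno = INIT_OK;
    return true;
}
```

The caller checks the return value and only consults init_errno on failure, exactly as with the C library's errno.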

> - You are forced to re-implement new (over, say, kmalloc, with some type of fancy error-code management).

Well, new is only as complicated as you want it to be. If you want it to be an automatically-sized malloc, you can just write it this way. You're certainly *not* forced to reimplement new each time you write a new (duh) class.
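For illustration, here is about how small a global operator new layered over a C allocator can be; malloc stands in for kmalloc so the sketch runs in user space:

```cpp
#include <cstdlib>
#include <new>

// malloc/free stand in for a kernel's native allocator here.
static void* kmalloc(std::size_t size) { return std::malloc(size); }
static void kfree(void* p) { std::free(p); }

// A global operator new that is just an automatically-sized
// wrapper around the C allocator.
void* operator new(std::size_t size) {
    void* p = kmalloc(size);
    if (!p) {
        // In a kernel one might panic() here instead; this sketch
        // keeps the standard throwing contract.
        throw std::bad_alloc();
    }
    return p;
}

void operator delete(void* p) noexcept { kfree(p); }
```

Note that this is written once, globally — not per class.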
> In the end, you're only left with objects and partial inheritance - or, in short, you more-or-less get an automated "this" pointer.

Again, it's essentially a matter of high readability. In my experience, C code gets messy and unreadable a bit too easily, while with C++ it's easier to keep statements short and easy to read.
> Call me crazy, but IMO it's far easier to implement objects in pure C (as is already being done, quite successfully, in the Linux kernel).

Everyone has the right to their own opinion ;)
In my opinion, code readability is more important than ease of implementation. And my experience with Linux code is that it doesn't exactly take readability too far.
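As an aside, the pure-C object technique mentioned above can be sketched like this (hypothetical names; the Linux kernel's real ops tables, such as file_operations, follow the same shape):

```cpp
// C-style "objects": instance data in a struct, plus a struct of
// function pointers (an ops table) standing in for a vtable.
struct device;

struct device_ops {
    int (*read)(struct device* dev);
};

struct device {
    const struct device_ops* ops;  // explicit hand-rolled "vtable"
    int value;                     // instance data
};

static int plain_read(struct device* dev) { return dev->value; }
static int doubled_read(struct device* dev) { return dev->value * 2; }

static const struct device_ops plain_ops = { plain_read };
static const struct device_ops doubled_ops = { doubled_read };

// "Virtual" dispatch, written out by hand.
int device_read(struct device* dev) { return dev->ops->read(dev); }
```

It works, but every call site spells out what a C++ compiler would generate for you — which is exactly the readability trade-off being debated here.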

> P.S. Which kernel are you using?

I couldn't get satisfied with the existing stuff (notably because I'm allergic to POSIX, which limits the range of possibilities quite a lot), so I rolled my own.

Edited 2010-10-15 05:56 UTC

Reply Parent Score: 2

gilboa Member since: 2005-07-06

> I use operator overloading heavily myself, since I implemented a cout-ish class for debug output (much easier to implement than printf, and more comfortable when it comes to using it IMO).
>
> You can still debug with the usual techniques, like moving a breakpoint forward and seeing where things go wrong. If the code crashes after the instruction using the operator, you know who's faulty...


While it is possible to remotely debug the Linux kernel or the Windows kernel, doing so will produce weird, if not unusable, results due to timing and scheduling changes.
In short, in most cases I either debug by eye or add a couple of log messages.
Now, as you said below, you've rolled your own kernel (nice!), which negates the "I need to debug code I didn't write" problem that makes operator overloading a -huge- no-no in most cases.
Try finding bugs in the STL without using a debugger and you'll understand what I mean...

> In a kernel, if the initialization of a basic component fails, I think it's time to panic() anyway. But if for some reason you really need an initialization/closing function to return an error code, you can just go the C way and use an "errno" global variable.


As long as you're alone and not running any type of three- or five-nines application inside the kernel.
Let's assume that I monitor traffic on 10 different network cards and the 10th NIC is flaky. If I panic, I lose all the traffic on the other 9 NICs; plus, nobody will know why I failed.
A far more sensible approach would be to continue working, disable the flaky NIC, and send a log message to the admin.
Plus, if the machine has RAS features, the admin will be able to replace the NIC without rebooting the machine or restarting my system.

As with the debugging discussion above, everything depends on what you're doing and inside which kernel. A production system cannot simply panic every time something minor (or even major) happens.
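A toy sketch of that policy (all names invented): a poll loop that takes the failing NIC offline and logs the event, while the healthy ones keep working:

```cpp
#include <string>
#include <vector>

// Hypothetical NIC state: did its last check succeed, and is it
// still enabled for traffic processing?
struct Nic {
    bool healthy;
    bool enabled = true;
};

// Poll every enabled NIC; instead of panicking on a flaky one,
// disable it and record a log message for the admin.
std::vector<std::string> poll_all(std::vector<Nic>& nics) {
    std::vector<std::string> log;
    for (std::size_t i = 0; i < nics.size(); ++i) {
        if (!nics[i].enabled) continue;
        if (!nics[i].healthy) {
            nics[i].enabled = false;  // take it offline...
            log.push_back("NIC " + std::to_string(i) + " disabled");
            continue;                 // ...but keep serving the rest
        }
        // normal traffic processing would happen here
    }
    return log;
}
```

The other nine NICs never notice; the admin gets a message; nothing reboots.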

> Well, new is only as complicated as you want it to be. If you want it to be an automatically-sized malloc, you can just write it this way. You're certainly *not* forced to reimplement new each time you write a new (duh) class.


What I meant was that unless you're writing your own kernel (as you do), in any C-based kernel you'll have to re-implement new on top of the native allocation function(s); but by doing so, you'll lose a lot of features (e.g. in the Linux kernel you pass fairly important allocation flags that will require some fancy footwork to emulate under C++).
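One way this is sometimes approached is an extra-argument overload of operator new. A sketch under the assumption of a kmalloc-style (size, flags) allocator — malloc stands in for the real thing, and the GFP names are borrowed from Linux purely for flavour:

```cpp
#include <cstdlib>

// Flag values borrowed from Linux's GFP naming, for illustration.
enum gfp_flags { GFP_KERNEL = 1, GFP_ATOMIC = 2 };

static gfp_flags last_flags = GFP_KERNEL;  // records what was requested

// Stand-in for a kernel allocator that honours allocation flags.
static void* kmalloc(std::size_t size, gfp_flags flags) {
    last_flags = flags;  // a real allocator would change behaviour
    return std::malloc(size);
}

// Extra-argument overload of operator new, selected by writing
// `new (GFP_ATOMIC) T` at the call site.
void* operator new(std::size_t size, gfp_flags flags) {
    return kmalloc(size, flags);
}
```

So the flags survive the trip through new — but gilboa's point stands: every such feature of the native allocator needs a hand-written bridge like this one.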

> Again, it's essentially a matter of high readability. In my experience, C code gets messy and unreadable a bit too easily, while with C++ it's easier to keep statements short and easy to read.


I usually say the same... about C++ ;)
IMHO a good C implementation is far more readable than C++ code due to the cleaner code/data layer separation. (Let alone the "what you see is what you get" effect that cannot be reproduced under C++ due to operators and virtual functions.)

> Everyone has the right to their own opinion ;)
> In my opinion, code readability is more important than ease of implementation. And my experience with Linux code is that it doesn't exactly take readability too far.


Actually, I find Linux (as messy as it is) easier to read than Windows (DDK) or *BSD. But as you said, it's a matter of opinion.

> I couldn't get satisfied with the existing stuff (notably because I'm allergic to POSIX, which limits the range of possibilities quite a lot), so I rolled my own.


As I said above, nice!
OSS project, or pastime / proprietary?

- Gilboa

Edited 2010-10-15 06:38 UTC

Reply Parent Score: 2

Neolander Member since: 2010-03-08

> While it is possible to remotely debug the Linux kernel or the Windows kernel, doing so will produce weird, if not unusable, results due to timing and scheduling changes.
> In short, in most cases I either debug by eye or add a couple of log messages.

In my case, when debugging things, I like to use a combination of breakpoints and log messages. Also, for kernel code debugging, I just love emulators like Bochs and QEMU; I couldn't imagine kernel work without them. Being able to instantly check code changes on a freshly compiled kernel from the command line is just priceless.

> Now, as you said below, you've rolled your own kernel (nice!), which negates the "I need to debug code I didn't write" problem that makes operator overloading a -huge- no-no in most cases.
> Try finding bugs in the STL without using a debugger and you'll understand what I mean...

Hey, that's cheating! ;) I can't live without my breakpoints. If I don't have some at hand, I just hard-code them using a while(1) ^^ (And for relatively high-level code like the STL, you can also use more subtle and powerful debugging tools like Valgrind to track down memory leaks.)

> As long as you're alone and not running any type of three- or five-nines application inside the kernel.
> Let's assume that I monitor traffic on 10 different network cards and the 10th NIC is flaky. If I panic, I lose all the traffic on the other 9 NICs; plus, nobody will know why I failed.
> A far more sensible approach would be to continue working, disable the flaky NIC, and send a log message to the admin.
> Plus, if the machine has RAS features, the admin will be able to replace the NIC without rebooting the machine or restarting my system.
>
> As with the debugging discussion above, everything depends on what you're doing and inside which kernel. A production system cannot simply panic every time something minor (or even major) happens.

Indeed, panic is obviously not, by any means, a universal solution to every single problem. I just recommended it in case the initialization of a vital part of the kernel fails, because at that stage there's obviously nothing left to do but display an error message and die.

At later boot stages, however, the situation is quite different. It's not the kernel that can't do its job, it's a process, or a module in the case of a monolithic kernel (here, the NIC driver). So that process/module can die or be unloaded alone and leave the rest of the system in peace. Just leave a message on the admin's desk, as you said, after all attempts at self-healing have failed ;)

> What I meant was that unless you're writing your own kernel (as you do), in any C-based kernel you'll have to re-implement new on top of the native allocation function(s); but by doing so, you'll lose a lot of features (e.g. in the Linux kernel you pass fairly important allocation flags that will require some fancy footwork to emulate under C++).

That's more of a problem with using C and C++ together than a problem with C++ itself, I think.

> I couldn't get satisfied with the existing stuff (notably because I'm allergic to POSIX, which limits the range of possibilities quite a lot), so I rolled my own.

> As I said above, nice!
> OSS project, or pastime / proprietary?

OSS project, on my spare time. I read Tanenbaum's wonderful "Modern Operating Systems", and it gave me the will to scratch some itches I have with current desktop operating systems, even at a very low level. It's an experiment in the design and implementation of a desktop operating system from the ground up. I don't know how far it will go, but I just feel the need to do it, "just to know".

Edited 2010-10-15 17:01 UTC

Reply Parent Score: 2