Linked by Thom Holwerda on Tue 12th Oct 2010 21:52 UTC
Java "Oracle and IBM today announced that the companies will collaborate to allow developers and customers to build and innovate based on existing Java investments and the OpenJDK reference implementation. Specifically, the companies will collaborate in the OpenJDK community to develop the leading open source Java environment."
Thread beginning with comment 445113
gilboa
Member since:
2005-07-06

... Then again, I must wonder: what's the use?
If you cannot use:
- operators (kernel code that cannot be debugged by reading it is a big no-no in my view);
- virtual functions (as above; plus, they may hide a major performance hit due to unknown vtable depth/size);
- you are forced to write your own per-class init and close functions (constructors and destructors cannot return error codes);
- you are forced to re-implement new (over, say, kmalloc, with some kind of fancy error-code management).

In the end, you're only left with objects and partial inheritance - or, in short, you more-or-less get an automated "this" pointer.

Call me crazy, but IMO it's far easier to implement objects in pure C (as is already being done, quite successfully, in the Linux kernel).

P.S. Which kernel are you using?

- Gilboa

Edited 2010-10-14 22:42 UTC

Reply Parent Score: 2

Neolander Member since:
2010-03-08

... Then again, I must wonder: what's the use?

It makes code a lot easier to read and work on, for various reasons, and enforces separation between the various components.

If you cannot use:
- operators (kernel code that cannot be debugged by reading it is a big no-no in my view).

I use operator overloading heavily myself, since I implemented a cout-ish class for debug output (much easier to implement than printf, and more comfortable to use, IMO).

You can still debug with the usual techniques, like moving a breakpoint forward and seeing where things go wrong. If the code crashes after the instruction using the operator, you know who's at fault...

- virtual functions (as above, plus, they may hide a major performance hit due to unknown vtables depth/size).
- You are forced to write your own per-class init and close functions (lack of constructor and destructor error code).

In a kernel, if the initialization of a basic component fails, I think it's time to panic() anyway. But if for some reason you really need an initialization/closing function to return an error code, you can just go the C way and use an "errno"-style global variable.

- You are forced to re-implement new (over, say kmalloc, with some type of fancy error code management).

Well, new is only as complicated as you want it to be. If you want it to be an automatically-sized malloc, you can just write it this way. You're certainly *not* forced to reimplement new each time you write a new (duh) class.

In the end, you're only left with objects and partial inheritance - or in-short, you more-or-less get an automated "this" pointer.

Again, it's essentially a matter of readability. In my experience, C code gets messy and unreadable a bit too easily, while with C++ it's easier to keep statements short and easy to read.

Call me crazy, but IMO, it's far easier to implement objects in pure C. (As it's already being done, quite successfully, in the Linux kernel)

Everyone has the right to their own opinion ;)
In my opinion, code readability is more important than ease of implementation. And my experience with Linux code is that it doesn't exactly push readability very far.

P.S. Which kernel are you using?

Couldn't get satisfied with the existing stuff (notably because I'm allergic to POSIX, which limits the range of possibilities quite a lot), so I rolled my own.

Edited 2010-10-15 05:56 UTC

Reply Parent Score: 2

gilboa Member since:
2005-07-06

I use operator overloading heavily myself, since I implemented a cout-ish class for debug output (much easier to implement than printf, and more comfortable when it comes to using it IMO).

You can still debug with usual techniques like moving a breakpoint forward and see where things go wrong. If the code crashes after the instruction using the operator, you know who's faulty...


While it is possible to remotely debug the Linux or Windows kernel, doing so tends to produce weird, if not unusable, results due to timing and scheduling changes.
In short, in most cases I either debug by eye or add a couple of log messages.
Now, as you said below, you've rolled your own kernel (nice!), which negates the "I need to debug code I didn't write" problem that makes operators a -huge- no-no in most cases.
Try finding bugs in STL without using a debugger and you'll understand what I mean...

In a kernel, if the initialization of a basic component failed, I think it's time to panic() anyway. But if for some reason you really need an initialization/closing function to return an error code, you can just go the C way and use an "errno" global variable.


As long as you're alone and you're not running any kind of three- or five-nines application inside the kernel.
Let's assume that I monitor traffic on 10 different network cards and the 10th NIC is flaky. If I panic, I lose all the traffic on the other 9 NICs, and nobody will know why I failed.
A far more sensible approach would be to continue working, disable the flaky NIC, and send a log message to the admin.
Plus, if the machine has RAS features, the admin will be able to replace the NIC without rebooting the machine or restarting my system.

As with the debug option above, everything depends on what you're doing and inside which kernel. A production system cannot simply panic every time something minor (or even major) happens.

Well, new is only as complicated as you want it to be. If you want it to be an automatically-sized malloc, you can just write it this way. You're certainly *not* forced to reimplement new each time you write a new (duh) class.


What I meant was: unless you're writing your own kernel (as you do), in any C-based kernel you'll have to re-implement new on top of the native allocation function(s), and by doing so you'll lose a lot of features (e.g. in the Linux kernel you pass fairly important allocation flags that will require some fancy footwork to emulate under CPP).

Again, it's essentially a matter of readability. In my experience, C code gets messy and unreadable a bit too easily, while with C++ it's easier to keep statements short and easy to read.


I usually say the same... about CPP ;)
IMHO a good C implementation is far more readable than CPP code, due to the cleaner code/data layer separation (let alone the "what you see is what you get" effect, which cannot be achieved under CPP due to operators and virtual functions).

Everyone has the right to their own opinion ;)
In my opinion, code readability is more important than ease of implementation. And my experience with Linux code is that it doesn't exactly take readability too far.


Actually, I find Linux (as messy as it is) easier to read than the Windows (DDK) or *BSD code. But as you said, it's a matter of opinion.

Couldn't get satisfied with the existing stuff (notably because I'm allergic to POSIX, which limits the range of possibilities quite a lot), so I rolled my own.


As I said above, nice!
OSS project, or pastime / proprietary?

- Gilboa

Edited 2010-10-15 06:38 UTC

Reply Parent Score: 2