Linked by Thom Holwerda on Thu 6th Sep 2012 21:32 UTC, submitted by MOS6510
Benchmarks "During the 4th Semester of my studies I wrote a small 3d spaceship deathmatch shooter with the D-Programming language. It was created within 3 Months time and allows multiple players to play deathmatch over local area network. All of the code was written with a garbage collector in mind and made wide usage of the D standard library phobos. After the project was finished I noticed how much time is spend every frame for garbage collection, so I decided to create a version of the game which does not use a GC, to improve performance."
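The summary's claim, that per-frame garbage collection cost is measurable and worth eliminating, is easy to demonstrate in any collected runtime. A minimal Python sketch (not D, and not the author's game code; purely illustrative):

```python
import gc
import time

def frame():
    # Simulate per-frame garbage: lots of short-lived allocations,
    # the pattern a naive game loop produces every tick.
    junk = [[i] * 10 for i in range(10_000)]
    return len(junk)

# Time the workload with a forced collection every frame...
gc.enable()
t0 = time.perf_counter()
for _ in range(50):
    frame()
    gc.collect()  # pay the collector's cost inside the frame
with_gc = time.perf_counter() - t0

# ...and the same workload with per-frame collections skipped.
gc.disable()
t0 = time.perf_counter()
for _ in range(50):
    frame()
without_gc = time.perf_counter() - t0
gc.enable()

print(f"with per-frame GC:    {with_gc:.3f}s")
print(f"without per-frame GC: {without_gc:.3f}s")
```

The same measurement in D would mean timing the frame loop before and after replacing GC allocations with manual ones, which is essentially what the article's author did.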
Thread beginning with comment 534183
RE: yes... but why?
by dorin.lazar on Fri 7th Sep 2012 04:35 UTC in reply to "yes... but why?"
dorin.lazar
Member since:
2006-12-15

Easier or not, the point is: memory management will be in the hands of the programmer. Why should we want that? 10% performance increase?


No, we're talking here about a competent programmer who was able to think about the time garbage collection eats from his run times.

The programmer who takes garbage collection for granted usually gets much worse times. Think 100% or 200%. Why? Because someone who doesn't keep in mind the metal under the framework (s)he uses will never be able to understand what a good trade-off is.

I prefer to outsource memory management to a bot and use the expensive programmer's time to do something else.


Well, outsourcing will not make you stand out for long. And the quality of a programmer lowers over time as he gets used to doing things the easy way. In the end, there's a reverse effect of this 'doing it the easy way': with the only tools the programmer knows (because the lower level will be inaccessible), he will have to do fine-grained work. It's like picking needles with a boxing glove; the framework is the boxing glove, and it looks nice and shiny.

But keep in mind: 80% of any project is doing the last 20%. And the last 20% IS picking needles. Or you can deliver 80% of your product and hope that you'll fool your client next time too. That's what most businesses do today in the western world, so I guess you may be OK.

Reply Parent Score: 1

RE[2]: yes... but why?
by lucas_maximus on Fri 7th Sep 2012 08:58 in reply to "RE: yes... but why?"
lucas_maximus Member since:
2009-08-18

Way to miss the point.

The whole point, and he is right, is: is that extra performance really worth the extra development time?

If it isn't, then it isn't worth it; it's a trade-off that should be considered part of the development process.

The other argument, that someone loses skills because they did stuff the easy way, is ridiculous.

Most of the code I work with is pretty much a WTF, because somebody wanted to do it the "clever" way. The productivity lost to this is massive when making minor modifications.

Reply Parent Score: 3

RE[3]: yes... but why?
by dorin.lazar on Fri 7th Sep 2012 09:55 in reply to "RE[2]: yes... but why?"
dorin.lazar Member since:
2006-12-15

Way to miss the point.

The whole point, and he is right, is: is that extra performance really worth the extra development time?

If it isn't, then it isn't worth it; it's a trade-off that should be considered part of the development process.


There's always the question of whether the extra performance is really worth the time. That's for everyone to decide; it's their business, not mine. But sometimes it's too late to optimize.

The other argument, that someone loses skills because they did stuff the easy way, is ridiculous.


A lot of people in software development forget about the psychological impact of the work that developers do. You might think that there's no impact, but please, think about it one more time.

Give a senior developer menial tasks, and you shall have a mediocre experienced developer. Remember, a developer is as good as the job he does, not the time he has invested in it.

Most of the code I work with is pretty much WTF, because somebody wanted to do it the "clever" way. The lost productivity due to this is massive when making minor modifications.


No, I'm not consigning 'the clever way' to hell. I'm talking about skipping some of the essentials. People who have never done a delete, nor ever thought about memory consumption, will have a hard time when such limits come into play. I recently had a WTF moment when someone was caching an entire table in memory, then copying the cache as a backup. His verdict was "sometimes it crashes, don't know why". There's a 'too stupid' too, not only a 'too clever'.
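That caching anti-pattern can be sketched concretely. A minimal Python illustration (all names hypothetical; this is not the actual code in question):

```python
import copy

# Hypothetical reconstruction of the anti-pattern: the entire table
# is held in memory as a cache...
cache = [{"id": i, "payload": "x" * 100} for i in range(50_000)]

# ...and then the "backup" deep-copies every row, roughly doubling
# resident memory in a single step. On a constrained machine, that
# spike is exactly where "sometimes it crashes, don't know why"
# comes from.
backup = copy.deepcopy(cache)

print(f"rows held twice in memory: {len(cache) + len(backup)}")
# → rows held twice in memory: 100000
```

A developer who has thought about memory consumption would back up incrementally, or persist to disk, rather than duplicate the whole working set at once.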

Reply Parent Score: 1

RE[2]: yes... but why?
by JAlexoid on Fri 7th Sep 2012 14:44 in reply to "RE: yes... but why?"
JAlexoid Member since:
2009-05-19

Well, outsourcing will not make you stand out for long. And the quality of a programmer lowers over time as he gets used to doing things the easy way. In the end, there's a reverse effect of this 'doing it the easy way': with the only tools the programmer knows (because the lower level will be inaccessible), he will have to do fine-grained work. It's like picking needles with a boxing glove; the framework is the boxing glove, and it looks nice and shiny.


The best programmers are lazy programmers (the essence of the DRY principle). Outsourcing memory management has always been the trend: first it was manual, then bare metal, then OSes managed it, and now runtimes manage it.

And needles today mostly come in the form of business requirements, not getting that extra 1ms. But then again, you can actually squeeze that 1ms out of almost any system.

Reply Parent Score: 3