Linked by Thom Holwerda on Sun 20th Apr 2008 15:43 UTC
General Development Geek.com is running an opinion piece on the extensive reliance of programmers today on languages like Java and .NET. The author lambastes the performance penalties that are associated with running code inside virtualised environments, like Java's and .NET's. "It increases the compute burden on the CPU because in order to do something that should only require 1 million instructions (note that on modern CPUs 1 million instructions executes in about one two-thousandths (1/2000) of a second) now takes 200 million instructions. Literally. And while 200 million instructions can execute in about 1/10th of a second, it is still that much slower." The author poses an interesting challenge at the end of his piece - a challenge most OSNews readers will have already taken on. Note: Please note that many OSNews items now have a "read more" where the article in question is discussed in more detail.
Order by: Score:
Comment by primelight@live.com
by primelight@live.com on Sun 20th Apr 2008 16:17 UTC
primelight@live.com
Member since:
2008-03-19

"...but let's face it, they do a lot less too."

Duh. That was his point.

Reply Score: 3

RE: Comment by primelight@live.com
by bryanv on Mon 21st Apr 2008 16:06 UTC in reply to "Comment by primelight@live.com"
bryanv Member since:
2005-08-26

I installed Windows 2000 Pro on a P3 600E (coppermine) from the late 90's this past weekend.

The box had 384MB RAM, a TNT2, and 9GB drives on Ultra-Wide SCSI2.

I was seriously impressed with how fast, responsive, and darn smooth this box was. I had forgotten how smoothly win2k ran on machines like my old 400mhz K6-2.

So my question is: what can you do on Windows XP (or even Vista) that you couldn't do in some way on Win2k? I'm not talking about eye-candy, funky interfaces, etc. I'm talking about what -kind- of application runs on Vista, but cannot, in any way, ever run on Win2000?

It's a function of time and engineering cost.

Photo management? Picasa.
Mp3 / Audio? iTunes 7.3.2 works on Windows 2000.
Games? This could be handled if developers spent the time writing drivers / using technology other than DirectX. It's an engineering issue.

It seems to me that the core functions of the OS haven't expanded. It's what people expect the OS to do with the software pre-installed out of the box.

Photo management isn't an OS function.
Audio management isn't an OS function.

Could an OS provide hooks to handle such management? Absolutely, but that's not a core OS feature, and an OS that doesn't expose that by nature could have an application written for it that -does-.

I disagree. OS's today don't do a whole lot more than they did 10 years ago. In fact, most OS's today are just now catching up with what BeOS did 10 years ago.

Reply Score: 3

Where did he get those numbers?
by sukru on Sun 20th Apr 2008 16:25 UTC
sukru
Member since:
2006-11-19

Seriously, Java and .Net are 200 times slower than C? Where do those numbers come from? Where is the study?

On the flip side, there is already evidence: a story on OSNews a few years back showed that the MS C# compiler actually produced faster code than the GCC C++ compiler on Windows in the same scenario (back then). And simple Google searches find similar results for Java, too: http://kano.net/javabench/

I'm not saying Java or C# is faster than C++, and I don't completely believe these benchmarks. But I know that they are at least comparable. As long as you're doing the same work, the results are very similar.

But as mentioned above, if you actually start doing more work (as in remote procedure calls, automated web services, virtual methods, reflection, etc.), your execution time naturally increases.

In my opinion, OSNews should raise its standards a little and require some proof in the articles it links, so that pieces like this one don't get included.

Edited 2008-04-20 16:27 UTC

Reply Score: 11

RE: Where did he get those numbers?
by alban on Sun 20th Apr 2008 17:00 UTC in reply to "Where did he get those numbers?"
alban Member since:
2005-11-15

It is not so much the speed of calculating a sieve that is the problem, it is the tendency for whole new frameworks to be layered on top of others almost endlessly.

Reply Score: 10

RE: Where did he get those numbers?
by phoehne on Sun 20th Apr 2008 22:09 UTC in reply to "Where did he get those numbers?"
phoehne Member since:
2006-08-26

It may not be 200 times slower, but it is slower. Across a series of benchmarks it is, on average, within the same order of magnitude as C/C++, which for most people is about the same speed. For a good idea of actual relative performance, see the Great Computer Language Shootout. A same-order-of-magnitude difference doesn't matter for applications centered around small packets of work interrupted by I/O or user think time. But even that depends on the application.

In graduate school I did quite a bit of Genetic Algorithm related classwork. This was purely compute-intensive code. In some of the more interesting applications, the fact that Java takes twice as long as similar code in C means 2 weeks of runtime instead of 1 week. If the code was largely computational and didn't do a lot of I/O, I would use Java, because it's a very productive language to work in. JDK 4/5 didn't seem to have the same I/O performance as straight C. Having no evidence, I always assumed this had to do with the layers of library code surrounding Java I/O.

If you start adding in issues like memory usage, the picture becomes murkier. Java requires significantly more memory than the equivalent C code. Depending on the version of the VM started up, the difference can be megabytes (for the VM and program) versus kilobytes (for the C code). The libraries layered on top of all this burn even more memory. For example, Rails on C Ruby versus JRuby: they both execute Ruby code, but JRuby deployed in a J2EE environment easily requires 512 megs of memory to do what C Ruby does in about 20-50 megs of RAM.

Like anything in life, it's a trade-off. Java is a very productive language with a huge world of libraries. For many businesses this represents a real cost savings for their custom, line-of-business applications. It is not the panacea that most Java developers pretend it is. In some cases the speed difference is real and can't be ignored. In some cases Java is actually faster than C due to run-time optimizations that C can't do (as it stands). However, it seems like we have to buy larger, more capable servers (in some cases with 4 times the RAM and 2 times the speed) just to do the same or a little bit more work than what we did last year. Part of that is the result of these huge frameworks that are not intrinsic to Java but come along for the ride (whether we want them or not).

Reply Score: 11

timefortea Member since:
2006-10-11

But these days buying hardware with 4 times the memory and twice the speed can be cheaper than getting the same app developed in C/C++ ...

Reply Score: 2

headius Member since:
2008-04-23

They both execute Ruby code, but JRuby deployed in a J2EE environment easily requires 512 megs of memory to do what C Ruby does in about 20-50 megs of RAM.


You're making that up. JRuby actually uses LESS memory than C Ruby. JRuby 1.0 might have used about the same or a bit more, but we've never used ten times as much memory. And JRuby 1.1 consistently uses less memory than C Ruby. Or are you referring to an entire Rails app under JRuby with ten instances (to be able to handle ten concurrent requests) versus a single C Ruby instance (which can only handle one)? Both JRuby and C Ruby must start up multiple instances to handle concurrent requests in Rails. The difference is that JRuby does it automatically (and configurably) and C Ruby does not. When you run both with the same number of instances, JRuby will win every time.

Check your facts please before you post comments like this.

Reply Score: 2

RE: Where did he get those numbers?
by rhyder on Sun 20th Apr 2008 23:11 UTC in reply to "Where did he get those numbers?"
rhyder Member since:
2005-09-28

Many of us had our first go at running Java desktop apps back in the mid 90s. Unfortunately, Swing was very slow at that point, and this created the myth, in the minds of many people, that Java is slow.

I think that Java could have made massive inroads into the desktop if only Sun had worked harder to polish Swing performance in the early days.

Reply Score: 4

sbergman27 Member since:
2005-07-24

Today, Java desktop apps are not exactly slow. But the slowness has been replaced by a tendency to be horrendously irritating. Download LimeWire and you will quickly become afraid to move your mouse, because every time you move it, say, a quarter inch, you get a new and annoying popup giving you all the gory details about another song you didn't really care about, blotting out your view of something that you *did* care about.

Reply Score: 1

Matt Giacomini Member since:
2005-07-06

A bad UI can be designed in any language.

Reply Score: 4

BTrey Member since:
2006-03-27

The only Java-based, full-fledged application I have any experience with is Eclipse, and my experience has been that Eclipse is significantly and noticeably slower than other IDEs. It's usable on a fast, modern system with lots of memory, but I have Linux loaded on a couple of older, slower boxes with half a gig of memory, and using Eclipse on them is downright painful.

I'm also not sure I agree with either the original article or the response above when it comes to operating systems. I use a computer that is on the military's NMCI network on a daily basis. Because of government requirements, it's still running Windows 2k. It's not noticeably faster than similar machines running XP, but neither do I notice any significant decrease in capability.

Reply Score: 1

danieldk Member since:
2005-11-18

I'm not saying Java or C# is faster than C++, and I don't completely believe these benchmarks. But I know that they are at least comparable. As long as you're doing the same work, the results are very similar.

It depends on the application, but I largely agree. I wrote a fairly CPU-intensive natural language processing application in both C++ and Java, and the C++ implementation was about 10% faster. Of course, this is in no way a scientific benchmark, but it suggests that the difference does not matter that much (especially compared to Java sans JIT). Of course, the same application is awfully slow in Python or Ruby. Which reinforces the old mantra: use the right tool for the job.

Of course, when comparing C++ and Java/C# there are many more interesting facets. For instance, language features like const-correctness, operator overloading, and templates are things I very much prefer in C++.

In this respect, I am very interested in the development of Digital Mars D, because it aims to provide a good middle road between C++ and more dynamic languages.

Reply Score: 2

RE: Where did he get those numbers?
by khaledh on Tue 22nd Apr 2008 02:28 UTC in reply to "Where did he get those numbers?"
khaledh Member since:
2007-03-30

I agree that broad claims such as the one in the post should not be made without a substantial study backing the claim.

That said, I once wrote a small data-mining app for finding nearest neighbors in a dataset of 1M records. I did the implementation once in C++ using the STL, and once in C#. The two implementations took nearly the same time to finish (on average over multiple runs).

That tells me that .NET is not slower than C++ when it comes to number crunching. A different type of app may behave differently, though.

Reply Score: 1

Right Tool for the task
by darkstego on Sun 20th Apr 2008 16:42 UTC
darkstego
Member since:
2007-10-26

I hope I don't sound like a broken record here, but I think developers should use the best tool for the task. Contrary to what Microsoft or Sun want you to believe, neither .NET nor Java is the be-all end-all of programming languages.

I think there needs to be a balance between programming ease and efficiency, where the most processor-intensive tasks are coded in C or C++, while simpler features are implemented in higher-level languages. A good example of this is Amarok, where most of the application is coded in C++, but the lyrics engine was changed from C++ to Ruby. Does that make Amarok slower? I very much doubt it. But it does make it tons easier to develop, maintain and expand.

On a side note, while the argument that recent versions of Office/Windows are slow compared to older versions is valid, the blame may not lie entirely with coding inefficiency. The extra features and backwards compatibility have something to do with it as well.

Reply Score: 13

RE: Right Tool for the task
by kaiwai on Sun 20th Apr 2008 16:56 UTC in reply to "Right Tool for the task"
kaiwai Member since:
2005-07-06

I hope I don't sound like a broken record here, but I think developers should use the best tool for the task. Contrary to what Microsoft or Sun want you to believe, neither .NET nor Java is the be-all end-all of programming languages.


True, but at the same time, I do think that the one thing .NET has as its advantage is the ability for C++ (and other) developers to move their code over to .NET and retain their skills - without needing to re-learn everything. That is the one problem with Java: it requires you to throw all your existing skills out the window. It's just not viable to take that approach.

One also has to acknowledge that .NET is more than just a competitor to Java, it is also a competitor to win32, it is ultimately going to be the future of development on Windows.

I think there needs to be a balance between programming ease and efficiency, where the most processor-intensive tasks are coded in C or C++, while simpler features are implemented in higher-level languages. A good example of this is Amarok, where most of the application is coded in C++, but the lyrics engine was changed from C++ to Ruby. Does that make Amarok slower? I very much doubt it. But it does make it tons easier to develop, maintain and expand.


True; personally, I think that programmer ease should be at the top, though; if something can be better implemented and better maintained, with fewer issues (memory management) cropping up, it should result in more reliable products. The easier it is for programmers to do their work, the less likely they are to make mistakes. I'd sooner have less 'teh snappy' in favour of more stability and security.

On a side note, while the argument that recent versions of Office/Windows are slow compared to older versions is valid, the blame may not lie entirely with coding inefficiency. The extra features and backwards compatibility have something to do with it as well.


Well, for me, I think that if Microsoft ported the whole Office suite to .NET, and made .NET available on Windows and Mac OS X, then the performance price would be worth paying. Although one would want to be optimistic, personally I think that Microsoft is running out of things to add or change in Office to make upgrading to the next version worthwhile.

Reply Score: 8

RE[2]: Right Tool for the task
by renhoek on Sun 20th Apr 2008 21:39 UTC in reply to "RE: Right Tool for the task"
renhoek Member since:
2007-04-29

True, but at the same time, I do think that the one thing .NET has as its advantage is the ability for C++ (and other) developers to move their code over to .NET and retain their skills - without needing to re-learn everything.


This is NOT an advantage; different platforms require different approaches and therefore different skills. I am currently maintaining an ASP.NET website coded by a guy who obviously loves CGI (everything is WriteLine'd and no inheritance was used, ever). This is a horror for me, and everyone after me. I'm rewriting the code now so it uses controls the way ASP.NET was designed to. Use the right tool for the job, and use it right (most people forget that last part).

Reply Score: 5

RE[2]: Right Tool for the task
by Matt Giacomini on Mon 21st Apr 2008 04:48 UTC in reply to "RE: Right Tool for the task"
Matt Giacomini Member since:
2005-07-06

If you already know C++ you are hardly throwing all your skills out when you move to Java.

I have worked in C++, C#, and Java, and I didn't find moving from C++ to Java any harder than moving from C++ to C#.

Edited 2008-04-21 04:50 UTC

Reply Score: 2

RE[3]: Right Tool for the task
by Doc Pain on Mon 21st Apr 2008 18:04 UTC in reply to "RE[2]: Right Tool for the task"
Doc Pain Member since:
2006-10-08

If you already know C++ you are hardly throwing all your skills out when you move to Java.


You're right. The way from C to C++ may be a bit complicated, but from C++ to Java it isn't that hard. The most important thing when you've learned C++ isn't the language itself - your skills are usually OO-oriented, and you recognize the means of the language, its constructs, its grammar and so on. This knowledge is mostly very generic (!) and you can use it with any further language you like.

A good programmer isn't a person who knows one language inside and out, but one who can translate a given problem into an algorithm, and then map this algorithm onto the desired programming language (or the best language for the given task).

Reply Score: 2

Comment by sonic2000gr
by sonic2000gr on Sun 20th Apr 2008 16:43 UTC
sonic2000gr
Member since:
2007-05-20

if BeOS (or the Amiga, or whatever) had been allowed to continue its development, reaching feature parity with the likes of XP/Vista and Mac OS X - would it still be as lean, slim, and fast as we remember it now?

I doubt it.


I doubt it too. By the time they reached whatever level of completeness they were aiming for, they would be bloated and "slow" in their own way.
And yes, I would never compare Win95 with a modern OS. Win95 would run one app fast, but what about running 10 apps fast, or 20 apps, 3 compiles, 5 web sites and 3 databases? What about crashes? Speed is not everything. You can never actually forget about the computer and focus on the job until your computing experience becomes reliable.

Reply Score: 2

RE: Comment by sonic2000gr
by Counsel on Fri 25th Apr 2008 20:16 UTC in reply to "Comment by sonic2000gr"
Counsel Member since:
2005-08-09

I agree that an "older" OS might not be able to multitask well. However, it might also be cheaper, and actually manageable, to move some of those apps to other computers.

I don't do lots of programming, but do many of us run 20 apps, 3 compiles, 5 browser windows, and 3 databases at one time on one computer? Not only must we have the right OS for the job, but the right hardware.

You could run some of those on another computer connected to a network (e.g., databases on a server) and off-load that "load" from the "desktop."

Opinions are just that, and we should remember they are very subjective. Regardless of whether one is faster or not, I might still program in X. If my program in X "works" for many people, it is irrelevant whether it was written in X or for the Y platform. Remember: when did the "user" ever sit and time two different apps to see which was faster before deciding which to use? Speed is one issue; there are many others.

As has been said regarding classical guitar playing, "Faster playing is just that...faster." I would say that faster isn't always better--before you argue, can other hardware or software do something you do on yours (hardware and software) faster? If so, why aren't you switching?

Just my 2 cents.

Edited 2008-04-25 20:19 UTC

Reply Score: 1

Comment by elrod
by elrod on Sun 20th Apr 2008 16:53 UTC
elrod
Member since:
2006-11-15

The original author from geek.com could save a lot of CPU cycles by first thinking about questions like "why are people using virtual machines?" BEFORE writing an article.
Missed opportunity.

Reply Score: 6

He's making the wrong point
by tristan on Sun 20th Apr 2008 17:08 UTC
tristan
Member since:
2006-02-01

Yes, managed code executes slower than native code, but in a great many cases, this really doesn't matter. GUI programmes spend the majority of their time doing nothing, waiting for user input. Sending some data across a network could easily take several orders of magnitude longer than processing that data, even if you're using managed code. And in the cases where it does matter, most managed languages make it easy to call native code to do the hardcore number-crunching.

No, the problem with managed environments is memory use. A poorly coded native app can eat memory (hello, Firefox!), but with managed apps memory use really is beyond a joke. Banshee, written in C# and running on Mono, is currently using 75MB on my system. Azureus, written in Java, is using 92MB. Memory use for Beagle (C#/Mono) is more than ten times that of Tracker (C), in my experience. And this just gets worse when running in a 64-bit environment.

Lastly, I have to take issue with this from the article:

And all because the real software developers out there, the ones who can program in assembly, or C or even C++ (though even with C++ you begin to lose so much over C, about 10% in performance or more)


Without wishing to start a flame war, it's simply not true that C++ is slower than C. In fact, if you let the compiler get clever with templates, it can sometimes be faster.

Reply Score: 18

RE: He's making the wrong point
by AndrewDubya on Sun 20th Apr 2008 18:04 UTC in reply to "He's making the wrong point"
AndrewDubya Member since:
2006-10-15

Awesome post! You are absolutely right. And to echo what everyone else has said, use the right tool for the job.

Two additional points:
1. Writing software is becoming increasingly complex. Lots of VMs and scripting languages remove the complexity and give benefits that are much more important than processing speed: added security, development speed, etc.

2. Similarly, programming is increasingly more accessible. People value dev time over CPU time (and claim they'll go back and "fix it later" ;-). I'm already afraid of what many programmers screw up in PHP... please don't suggest C to them!

SWIG is always there for you if you need to make the code faster.

Reply Score: 3

RE: He's making the wrong point
by dagw on Sun 20th Apr 2008 22:06 UTC in reply to "He's making the wrong point"
dagw Member since:
2005-07-06

Without wishing to start a flame war, it's simply not true that C++ is slower than C.


Agreed. However C++ gives you many more options for shooting yourself in the foot and producing very slow code if you don't know what you're doing. So while it's certainly possible to write C++ code that is as fast as C, it isn't necessarily easy.

Reply Score: 8

RE[2]: He's making the wrong point
by gilboa on Mon 21st Apr 2008 15:02 UTC in reply to "RE: He's making the wrong point"
gilboa Member since:
2005-07-06

Agreed. However C++ gives you many more options for shooting yourself in the foot and producing very slow code if you don't know what you're doing. So while it's certainly possible to write C++ code that is as fast as C, it isn't necessarily easy.


... I call such code C-Plus-Minus.
C code rewritten in C++ with minimal mucking around (no virtual functions, overloading, etc.).

- Gilboa

Reply Score: 2

RE[3]: He's making the wrong point
by dagw on Mon 21st Apr 2008 16:37 UTC in reply to "RE[2]: He's making the wrong point"
dagw Member since:
2005-07-06

... I call such code C-Plus-Minus.
C code rewritten in C++ with minimal mucking around (no virtual functions, overloading, etc.).


The exact opposite approach can cause just as many performance problems, if not more. Things like creating a class when you could use a simple variable (I've seen plenty of classes that were essentially a complicated wrapper around a double), and then using dynamically allocated STL vectors of that class when a statically allocated array of doubles would have worked just as well.

Admittedly, many of these things can be defended as "good design" in some sense, and as such aren't always a bad idea. But you have to be aware that they all come with a performance penalty, so you have to balance the good with the bad. One of the advantages of C++ is that you have this choice.

The problem is people hear unqualified statements like "C++ is just as fast as C" and think that any C++ code they write will always be 'fast' simply because C is 'fast'.

Reply Score: 2

gilboa Member since:
2005-07-06

I fully agree.
I work in a hybrid C/C++ group.
Kernel code and OS abstraction libraries are written in C while service wrappers and the actual applications are written in C++.

AFAICS, OO tends to draw (good) programmers into a lot of data management and packaging games - read: what would be a trivial pass-a-structure-pointer-around problem in C tends to turn into a nasty (and slow) data packaging and repackaging problem in C++.

In our CPP developers' defense, they are very performance conscious and rarely wrap a single boolean variable with a 1000-line class. (Hence, large chunks of their code are C-Plus-Minus.)

... Just don't get me started about .NET/C#. (Other groups are using it; performance is horrible; switching back to CPP...)

- Gilboa

Reply Score: 2

RE: He's making the wrong point
by siimo on Sun 20th Apr 2008 23:07 UTC in reply to "He's making the wrong point"
siimo Member since:
2006-06-22

No, you are wrong about the memory usage. Granted, some managed apps may have this problem, but it is not necessarily true.

Memory allocation for managed code is handled by the runtime (.NET, Java, etc.), which sometimes allocates memory to an application and does not reclaim it until some other application on the system needs it; if the memory is otherwise unused, the runtime keeps it so that things remain cached.

Yes, some programs have memory leaks, but *don't* go by what your task manager says when checking the memory usage of managed code.

Reply Score: 2

RE[2]: He's making the wrong point
by SomeGuy on Mon 21st Apr 2008 03:23 UTC in reply to "RE: He's making the wrong point"
SomeGuy Member since:
2006-03-20

Even entirely disregarding memory leaks, a managed program will have a much higher memory footprint than a non-native compiled program. There are three main reasons for this:

1) A virtual machine with JIT means you *cannot* demand-page. Suppose you have 20 processes that each load the same 10 shared libraries, which for the sake of argument are 10 megs in size.

In a VM you'll have 20*10 megs - a separate copy of the JIT'd code for each process - plus the runtime's memory overhead.

In a natively compiled language, the OS is smart enough to share all the compiled code between all the processes, so you only use 10 megs for *all* the processes. (OK, this isn't an entirely accurate view, but it's a good first approximation.)

2) Garbage collectors run every now and then. This means that memory usage piles up as you create and forget about objects. Sure, you don't leak memory any more, but you need room for the garbage waiting to be collected. Research papers [sorry, no links at the moment, but they're around] show that to be effective, a garbage-collected system needs between 1.5 and 5 times the amount of RAM, depending on the exact collection algorithms and usage patterns of the program. The garbage has to sit around somewhere between the time you finish using it and the time the GC kicks in and cleans it up.

3) Finally, this isn't an intrinsic property of a VM environment, but the languages that run inside a VM tend to emphasize making lots of little objects. This leads to everything from lots of garbage being created (see point 2) to memory fragmentation causing increased heap size (or increased CPU time, if you have a compacting/copying GC).

So, while VM-based languages certainly have their place, pretending that they're as memory efficient as their natively compiled counterparts with manual memory allocation is rather a crackpot notion at this time.

Reply Score: 4

draethus Member since:
2006-08-02

In a VM you'll have 20*10 megs - a separate copy of the JIT'd code for each process - plus the runtime's memory overhead

What a load of rubbish. Java has been using class data sharing for a good while now, which shares memory between JVM instances, avoiding that very problem.

Reply Score: 2

'Nuff said?
by laserface on Sun 20th Apr 2008 17:25 UTC
laserface
Member since:
2008-04-07
Stupid
by evangs on Sun 20th Apr 2008 17:28 UTC
evangs
Member since:
2005-07-07

And all because the real software developers out there, the ones who can program in assembly, or C or even C++ (though even with C++ you begin to lose so much over C, about 10% in performance or more), are a dying breed.


I believe this statement that I've quoted from the article exemplifies how retarded that article is. The most obvious fact is that the author overstates the performance impact of the shift to more modern programming languages. Java and .NET hardly increase the runtime requirements by a factor of 200 as is implied by the author. Writing code in C does not automatically mean your code is fast and tight.

On the other hand, his statement about the performance penalties associated with the move to C++ from C is utterly ridiculous. Ignoring virtual functions and polymorphism, C++ is not slower than C. In fact, thanks to templates that are evaluated at compile time, it can frequently be faster. Just look at a comparison between std::sort and qsort. For example, see http://theory.stanford.edu/~amitp/rants/c++-vs-c/ where the standard C library gets blown away by STL's sort. In fact, the STL sort beats the hand-optimized C sort too.

C++ is not perfect. But in comparison with C, you get typesafe inlines (which replace #defines), safe strings, polymorphism, and template metaprogramming which when judiciously used greatly increase your program's execution speed. Yes, C++ is a few orders of magnitude more complex than C, but the gains in productivity and runtime performance are more than worth it.

In closing, the article is based on a false premise: namely, that the shift to type-safe, sandboxed programming languages has caused a significant amount of bloat in software. To back his claim up, the author has just pulled a few numbers out of his arse that are so far off target that they only serve to make him look like an imbecile. Software bloat is there because users demand more features. Your software from the mid-90s will run faster, but it doesn't support Unicode, isn't anti-aliased, and doesn't sport the "Wow" effects that many users seem to like.

Reply Score: 9

RE: Stupid
by gilboa on Tue 22nd Apr 2008 05:54 UTC in reply to "Stupid"
gilboa Member since:
2005-07-06

I believe this statement that I've quoted from the article exemplifies how retarded that article is.


Simplified? Yes.
Retarded? No.

The most obvious fact is that the author overstates the performance impact of the shift to more modern programming languages. Java and .NET hardly increase the runtime requirements by a factor of 200 as is implied by the author. Writing code in C does not automatically mean your code is fast and tight.


I agree.

On the other hand, his statement about the performance penalties associated with the move to C++ form C is utterly ridiculous. Ignoring virtual functions and polymorphism, C++ is not slower than C. In fact, thanks to templates that are evaluated at compile time, it can frequently be faster. Just look at a comparison between std::sort vs qsort. For example, see http://theory.stanford.edu/~amitp/rants/c++-vs-c/ where the standard C library gets blown away by STL's sort. In fact, the STL sort beats the hand optimized C sort too.


The test is far too old, and their hand-written code is not what I'd call optimized, so I wouldn't put too much weight behind this benchmark.
It is true that CPP -can- be just as efficient as C.
... But as I said in another post, the main problem is over-object-orientation. Even good CPP programmers tend to fall into the same usual pitfalls: turning trivial problems into massive class/data (re)packing issues.

C++ is not perfect. But in comparison with C, you get type-safe inlines (which replace #defines), safe strings, polymorphism, and template metaprogramming, which, when used judiciously, can greatly increase your program's execution speed. Yes, C++ is a few orders of magnitude more complex than C, but the gains in productivity and runtime performance are more than worth it.


C has supported C++-style inline functions since C99; macros are no longer needed for that. (Though macros do have the built-in benefit of access to the local variables in the caller's scope.)
Safe strings carry additional overhead; I wouldn't use them if I were designing a high-performance string manipulation system. (But that's me.)
In my eyes, templates are a good example of what's good and bad about C++: on one hand they make your life far easier, while on the other, much like operator overloading, they can make the code unreadable. (Go figure out what A++; means...)

In closing, the article is based on a false premise: namely, that the shift to type-safe, sandboxed programming languages has caused a significant amount of bloat in software. To back his claim up, the author has pulled a few numbers out of his arse that are so far off target that they only serve to make him look like an imbecile. Software bloat is there because users demand more features. Your software from the mid-90s will run faster, but it doesn't support Unicode, isn't anti-aliased, and doesn't sport the "Wow" effects that many users seem to like.


A. Office 2K/Windows 2K supported Unicode just fine.
Heck, even Office 97 (!!!!) had preliminary Unicode support.
B. Be that as it may, Office 2K7/Vista is -not- 50 times better than Office 2K/Windows 2K - and I refuse to believe that feature bloat is the only reason for it.

- Gilboa

Reply Score: 2

RE[2]: Stupid
by sanders on Tue 22nd Apr 2008 08:08 UTC in reply to "RE: Stupid"
sanders Member since:
2005-08-09

(...) much like over-loading, it makes the code unreadable. (Go figure out what A++; means...)


Aargh! If I hear this argument once more, I'm going to kill a kitten.

I will figure out what A++; means right after I have figured out what incr(A); means, OK?

http://www.curly-brace.com - Introduction to Computer Programming for the Scientifically Inclined

Reply Score: 1

RE[3]: Stupid
by gilboa on Wed 23rd Apr 2008 10:07 UTC in reply to "RE[2]: Stupid"
gilboa Member since:
2005-07-06

Aargh! If I hear this argument once more, I'm going to kill a kitten.


Ummm. Oh, OK.

I will figure out what A++; means right after I have figured out what incr(A); means, OK?


C doesn't mix data and code, which makes it possible to understand what a piece of code is doing (and even fix bugs) -without- having to learn the underlying data model by heart.
A = B + C will always be scalar (either integer or pointer math). The same cannot be said about C++'s A = B + C when at least one operand is a class.

I often don't have a debugger; I have to find bugs by reading the code, over and over and over again.
Good luck trying to do the same when you encounter a crash inside the STL.

http://www.curly-brace.com - Introduction to Computer Programming for the Scientifically Inclined


I assume this insult/joke/trolling/whatever is supposed to solidify your (weak) argument?

- Gilboa

Reply Score: 2

A matter of cost
by unoengborg on Sun 20th Apr 2008 17:30 UTC
unoengborg
Member since:
2005-07-06


We need people who are willing to spend the extra few weeks or months it would take to develop and maintain code using something that’s close to the machine.


Yes, that would be good, especially if they would work for free. It is also not a matter of weeks or months; for more complex applications it is more likely a matter of years. If you don't rely on frameworks, you will be testing the same things over and over again in each new application you write. Running in virtual machines also makes it easier to develop to a more consistent target, and that too keeps costs down.

Whether development of an OS takes one, two, or five years may not be a big problem for an OS vendor: if he is successful, he will sell so many licences that development cost hardly matters compared to, e.g., marketing. But most software is not sold as shrink-wrapped packages or bundled with new hardware.

The majority of all software is developed in house, where only a few copies of the finished product will ever be made. There it is absolutely essential to keep development costs down, or the project will get cancelled very quickly.

Reply Score: 2

RE: A matter of cost
by theTSF on Sun 20th Apr 2008 18:33 UTC in reply to "A matter of cost"
theTSF Member since:
2005-09-27

DING DING DING You are correct!
Let's do some simple numbers. Average programmer salary of $50,000 + average benefits ($15,000) means a programmer costs a company $65,000 a year, or about $31.00 an hour - just for the developer, before extra overhead (management, power, parking, office space for his butt...).

So he could write his program in a higher-level language in, say, one year, or in a lower-level language in one and a half years: $65,000 for the app vs. $97,500, a $32,500 difference. Take into account that most of the time the system is idle, so the performance difference noticeable to the end users is not a factor of 200 but more like 4x (if the app can process faster than the person can react, it doesn't count). So for an extra $32,500 you have an app that, as far as the end users care, runs 4 times faster. Or you can instead spend an additional $20,000 to upgrade the servers to run 4 times faster, and you are still better off.

Code performance is really an academic topic, and for most apps not an issue unless processing takes a noticeable amount of time.

Reply Score: 5

Lightweight Distros.
by Flatland_Spider on Sun 20th Apr 2008 17:48 UTC
Flatland_Spider
Member since:
2006-09-01

Why didn't he point out the lightweight/old-PC Linux distros like Puppy or DSL? DSL works on my ancient 233 MHz Pentium II with 64MB of RAM. They're not as full-featured (app-wise) as some other distros, but I'm not asking for much from that thing.

Reply Score: 3

JonathanBThompson
Member since:
2006-05-26

So I'll cover this one: whatever you think about how good/bad/ugly Win9x was, it was NOT merely Windows 3.1 running over an old DOS shell. Certainly it had a lot of backwards compatibility with older MS-DOS, which was both its big advantage and its big disadvantage. It could also run on older machines with a lot less memory, because much of it was known-working, smaller, more optimized (and less portable) 16-bit code that also preserved compatibility with 16-bit Windows applications. That included the Win16Mutex (I forget the official formal name), which kept more than one process from doing things simultaneously that would have broken the cooperative multitasking model of Windows 3.1 applications. The very obvious downfall of this backwards-compatibility mutex was that if an ill-behaved process had something happen while holding it, the whole system was effectively dead in the water.

The curious thing is that benchmarks of the Windows variants (NT and Win9x) showed that for a lot of the threading primitives, Windows NT was faster! However, because of how GDI lived outside the kernel in NT at that time, Windows 9x was likely more responsive on the same hardware, as long as the applications were well-behaved. The MS-DOS "shell" he claims Win9x ran under was really more of a boot loader that bootstrapped Windows 9x; once its job was done, it was gone. And even though NT was technically better to run than Windows 9x for stability and other reasons, it simply wouldn't run well on low-end hardware (more of a slow crawl), because lower-end hardware lacked the larger CPU caches and didn't have enough RAM to keep as much of the larger system in the active working set. Thus Win9x was the more practical tradeoff (besides the backwards-compatibility issue) for the standard consumer who insisted on running all or most of their old stuff as-is (WoW couldn't handle a lot of the older stuff).

Reply Score: 7

biffuz Member since:
2006-03-27

Very right. I didn't expect a statement such as "Windows 9x was based on DOS" on a serious OS site like this.

There's another point: looking like it was based on DOS was helpful in the home/low-end market (people felt safe about their old DOS games and apps), but it scared away the corporate market and pushed it toward the pricier NT. Cool commercial strategy.

Reply Score: 2

edwdig Member since:
2005-08-22

The reality is that what the MS-DOS shell he claims was what it ran under, was really more of a boot loader to bootstrap Windows 9x, and once its job was done, it was gone.

That's not really true. If you look at the Caldera antitrust lawsuit against MS, you can see evidence to the contrary. They modified DR-DOS to identify itself as MS-DOS 7 and to log all calls to int 21h (basically, calls asking DOS to do something), then ran Win95 on top of it. They found that DOS was still used rather extensively - for pretty much everything it was capable of doing.

Reply Score: 1

NeXTSTEP -> Mac OS
by NicolasRoard on Sun 20th Apr 2008 18:05 UTC
NicolasRoard
Member since:
2005-07-16

However, I can't help but wonder: if BeOS (or the Amiga, or whatever) had been allowed to continue its development, reaching feature parity with the likes of XP/Vista and Mac OS X - would it still be as lean, slim, and fast like we remember it now?


In fact, we do have an example of such a thing: Mac OS X. Remember, it's directly based on NeXTSTEP -- which ran perfectly fine on a Motorola 68030 machine!
It even used Display PostScript, i.e. true WYSIWYG, etc.

So what would have happened with BeOS or Amiga? The exact same thing. And I don't think we got a bad deal out of it -- it's not like there is no added functionality gained with the increase in CPU power.

Reply Score: 2

I would agree
by vtolkov on Sun 20th Apr 2008 18:35 UTC
vtolkov
Member since:
2006-07-26

There is such a thing as a theoretical limit. If we need to add two numbers, the theoretical limit is one instruction. If we need to convert video from one format to another, it will be much more, and we can calculate it from the mathematical algorithm. Everything above that is just practical cost. Any level of virtualization adds to this overhead. That is fine if it adds some benefit, not just overhead. In most cases it does not; it just takes your processor speed, your memory, and your battery life to pay for developers' laziness.

Reply Score: 2

Wrong diagnosis
by sergio on Sun 20th Apr 2008 18:51 UTC
sergio
Member since:
2005-07-06

The problem isn't the language, the real problem is "featurism".

http://catb.org/jargon/html/C/creeping-featurism.html

Reply Score: 3

Same thing is going on with the web
by -APT- on Sun 20th Apr 2008 18:57 UTC
-APT-
Member since:
2007-03-20

Instead of using a typical mail application to download their mail, many people are using fancy webmail clients to view it.

Yes, there are many advantages to web-based applications, but the same disadvantages mentioned in this article apply: it isn't the most efficient way of running an application! Although we're seeing great improvements in the latest browsers, it isn't going to be as efficient as running a native application designed for the task on your own machine.

Reply Score: 2

Premature Optimization
by rajj on Sun 20th Apr 2008 18:58 UTC
rajj
Member since:
2005-07-06

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Knuth

Reply Score: 4

Article author misguided
by StaubSaugerNZ on Sun 20th Apr 2008 19:26 UTC
StaubSaugerNZ
Member since:
2007-07-13

The article's author is quite misguided. The slowness of the Windows line of operating systems isn't due to modern languages, since Windows is written in neither Java nor C#. It's the extra features that slow things down (flying file animations, DRM everywhere). However, I have noticed that even Windows XP is very slow (~10 mins) compared to Ubuntu (~30 sec) when copying many small files from a USB drive. I don't know why this is, but it affects me every time I copy my codebase between different client machines. Vista is reputed to be even slower, and it doesn't make sense why Windows should be slower in this case. Whatever the reason may be, it is definitely not the use of managed languages.

In my own work I've used C#.NET, and it is quite nice and reasonably fast - certainly not 200 times slower than C. I use C for interfacing with hardware but much prefer a "managed" language for the productivity boost and the wealth of libraries. In fact my preference is Java: apart from a few syntactic goodies that C# has and Java does not (not a major factor in my choice between the two), I much prefer Java, as it runs on far more targets and platforms than C# (even the Mono implementation doesn't have as mature a set of supporting libraries as Java).

As far as performance goes, Java is close enough to C++ for my uses (a lot of image processing) that it doesn't matter. In fact, with Java 6u10, all graphics using Java2D are hardware accelerated via Direct3D pixel shaders, and you don't have to change a single line of code to benefit. Nice. So Java would handily beat anything written in C/C++ that didn't use OpenGL or Direct3D (which would require more developer effort to get going).

Reply Score: 2

RE: Article author misguided
by Tom K on Sun 20th Apr 2008 20:09 UTC in reply to "Article author misguided"
Tom K Member since:
2005-07-06

"DRM everywhere" slows things down in Windows? Really?

1. Show me where the DRM is -- specifically ("everywhere" isn't going to cut it)
2. Prove that whatever DRM system there is in Windows actually slows things like UI and file copying down

Reply Score: 2

StaubSaugerNZ Member since:
2007-07-13

"DRM everywhere" slows things down in Windows? Really?

1. Show me where the DRM is -- specifically ("everywhere" isn't going to cut it)
2. Prove that whatever DRM system there is in Windows actually slows things like UI and file copying down


I can't show you as I don't have the source. I was speculating, since for DRM to work it must at least check whether any given file requires protection or not, which is an extra operation not required on systems that don't have DRM.

Oh, I got it. You might have read my post as implying that all files in Vista have or require DRM. Apologies for being unclear, I meant that *all* file operation (and graphics, and sound) paths must consider whether DRM must be applied or not - otherwise DRM could be circumvented. Has this clarified my original meaning?

Reply Score: 4

RE[3]: Article author misguided
by Tom K on Mon 21st Apr 2008 19:29 UTC in reply to "RE[2]: Article author misguided"
Tom K Member since:
2005-07-06

That determination is made once, and it's a simple operation in the "open" stage of the media layer.

I don't see how that will make any perceivable difference in responsiveness.

Reply Score: 2

StaubSaugerNZ Member since:
2007-07-13

That determination is made once, and it's a simple operation of the "open" stage of the media layer.

I don't see how that will make any perceivable difference in responsiveness.



That would be logical, but are you absolutely 100% sure that is what happens and that the check isn't made more than once (it is common for anti-piracy protection in games to check in several places)? And if DRM isn't affecting code paths in Vista, then what is the reason for its glacial performance relative to XP?

Reply Score: 1

RE[3]: Article author misguided
by tomcat on Tue 22nd Apr 2008 05:47 UTC in reply to "RE[2]: Article author misguided"
tomcat Member since:
2006-01-06

"DRM everywhere" slows things down in Windows? Really? 1. Show me where the DRM is -- specifically ("everywhere" isn't going to cut it). 2. Prove that whatever DRM system there is in Windows actually slows things like UI and file copying down.

I can't show you as I don't have the source. I was speculating, since for DRM to work it must at least check whether any given file requires protection or not, which is an extra operation not required on systems that don't have DRM. Oh, I got it. You might have read my post as implying that all files in Vista have or require DRM. Apologies for being unclear, I meant that *all* file operation (and graphics, and sound) paths must consider whether DRM must be applied or not - otherwise DRM could be circumvented. Has this clarified my original meaning?

I think you're lumping several different concepts together as "DRM". At the driver level, Windows supports EFS (Encrypting File System), which lets the operating system transparently encrypt files as they're written to disk and decrypt them on read. This capability is transparent to any application running in a given user session. Few people run with EFS enabled, in my experience; usually it's folks with a mobile device (e.g. a notebook PC) who want to make sure their data doesn't fall into the wrong hands if the notebook is lost or stolen. There's certainly a non-zero perf hit for encrypting and decrypting data on the fly - some have suggested on the order of 5-20% - but EFS isn't turned on by default. Various types of media -- DVDs, WMA, WMV, Office documents, email, etc -- require DRM to decrypt their content streams, but few content streams actually contain this kind of encrypted data. So, really, what are you talking about here? Is this just something somebody told you, or do you have an actual complaint?

Reply Score: 2

StaubSaugerNZ Member since:
2007-07-13

So if Vista isn't affected by DRM paths (the major change between it and XP), why are its file operations so damned slow? Perhaps you are right and it's not DRM that's the cause, but then the alternative is a worse scenario: that the Microsoft Vista development team (as smart as they are individually) produced very poorly performing code in either the file manipulation layer or the Microsoft-supplied USB block device driver. As I mentioned before, I notice on a daily basis that Linux is a factor of 20 faster when copying a few thousand smallish (source code) files from a USB thumbdrive (10 minutes vs 30 seconds - it makes a difference!). Windows XP isn't much better, but it is a little better than Vista.

Reply Score: 1

RE[5]: Article author misguided
by tomcat on Tue 22nd Apr 2008 20:49 UTC in reply to "RE[4]: Article author misguided"
tomcat Member since:
2006-01-06

So if Vista isn't affected by DRM paths (which is the major change between itself and XP) why are the file operations so damned slow? Perhaps you are right and its not DRM that's the cause, but then the alternative is a worse scenario - that the Microsoft Vista development team (as smart as they are individually) produced very poorly performing code for either the file manipulation layer or the Microsoft-supplied USB block device driver. As I mentioned before, I notice on a daily basis that Linux is a factor of 8 times faster when copying a few thousand smallish (source code) files from a USB thumbdrive (10 minutes vs 30 seconds, it makes a difference!). Windows XP ain't much better, but it is a little better than Vista.


Slow file transfers are a known problem on Vista. One of the reasons is that MS implemented a feature called Remote Differential Compression, which basically tries to "diff" the file on both sides to see whether it needs to send the data at all. Some people claim you can speed up file transfers by turning this feature off. Read this article:

http://www.windvis.com/how-to-fix-the-slow-file-transfers-problem-i...

Reply Score: 2

sbergman27 Member since:
2005-07-24

Slow file transfers is a known problem on Vista. One of the reasons is that MS implemented a feature called Remote Differential Compression, where it basically tries to "diff" the file on both sides to see whether it needs to send the data.

Meanwhile rsync continues to be preternaturally fast. :-)

Reply Score: 2

StaubSaugerNZ Member since:
2007-07-13

Slow file transfers is a known problem on Vista. One of the reasons is that MS implemented a feature called Remote Differential Compression, where it basically tries to "diff" the file on both sides to see whether it needs to send the data. Some people claim that you can speed up file transfers by turning off this feature. Read this article:

http://www.windvis.com/how-to-fix-the-slow-file-transfers-problem-i...


Thanks for that bit of info. Why is file copying also relatively slow on XP? Is the directory re-read each time a new file is to be copied?

Reply Score: 1

Also to factor in
by stestagg on Sun 20th Apr 2008 19:29 UTC
stestagg
Member since:
2006-06-03

Also to factor in is the time it takes to initialise the virtual machine.

The initialisation overhead for native applications on Windows is not noticeable, nor easily measured.

.NET applications can take up to a second extra to initialise, and UI latency on the order of 1 second is a big problem for most people.

Reply Score: 4

RE: Also to factor in
by StaubSaugerNZ on Sun 20th Apr 2008 20:12 UTC in reply to "Also to factor in"
StaubSaugerNZ Member since:
2007-07-13

Also to factor in is the time it takes to initialise the virtual machine.

The initialisation overhead for native applications in windows is not noticeable, or easily detected.

.Net applications can take up to a second extra to initialize. UI latency in the order of 1 second is a big problem to most people.


Agreed. This seems to be what keeps Java applets out of common use in the browser (JVM startup time can be horrendous, though applet performance is very good once running). Java 6u10 tries to address this by running a process called jqs.exe (Java "quickstarter", I believe) at Windows startup. I haven't yet checked whether this makes much difference to startup performance; has someone else quantified the improvement?

Of course, when measuring startup performance of the Java/C# virtual machines against native applications, one has to take into account that the native C library (libc or equivalent) is already in memory, so no (expensive!) disk access is required. So the jqs process may level the playing field somewhat.

Edit: fixed typo 'tartup time' to 'startup time' (the difference is 40 mins vs 1 second, if my wife is to be used as a benchmark, LOL).

Edited 2008-04-20 20:24 UTC

Reply Score: 1

RE[2]: Also to factor in
by stestagg on Sun 20th Apr 2008 20:30 UTC in reply to "RE: Also to factor in"
stestagg Member since:
2006-06-03

Except that now I have the jqs, .Net vm, VCRT, adobe PDF vm and various other libraries loading themselves and performing maintenance all together, while I am trying to use my computer for real work.

Reply Score: 2

RE[3]: Also to factor in
by StaubSaugerNZ on Sun 20th Apr 2008 20:56 UTC in reply to "RE[2]: Also to factor in"
StaubSaugerNZ Member since:
2007-07-13

Except that now I have the jqs, .Net vm, VCRT, adobe PDF vm and various other libraries loading themselves and performing maintenance all together, while I am trying to use my computer for real work.


The slowdown should be negligible when they are not in use (they'll be "sleeping"). What they really hog is memory. Fortunately, memory is exceedingly cheap these days, and getting 2 or 4 GB does not cost an arm and a leg like it used to (although I've seen corporations and governments still try to make developers work on 512 MB, which is not properly thinking through the economics on their part).

Reply Score: 1

RE[4]: Also to factor in
by jlarocco on Mon 21st Apr 2008 08:19 UTC in reply to "RE[3]: Also to factor in"
jlarocco Member since:
2005-09-14

Until the price of RAM is 0, or you're offering to buy it for me, stop using that argument.

Reply Score: 1

Missing the 'big parts' of performance
by Yamin on Sun 20th Apr 2008 19:57 UTC
Yamin
Member since:
2006-01-10

The debate between C, C++, C#, Java... is really a non-issue. I'm an embedded programmer, but I've done my fair share of Java/C# programming. The managed languages are fine; the only real performance issue I have with them is startup time. Once they get up and running, they're almost as good as native applications.

Even with the managed languages, there's a great deal of skill involved in making code efficient. I'd argue a good C programmer is well suited to writing managed apps, because they can "guess" how things are really implemented in the back end: they're more likely to choose the right storage container, to use object pools when needed, and so on.

Interpreted languages are another basket altogether. There, the performance gap is definitely substantial.

I think we really need to look at the overall system to see where the slowdowns happen.
1. We have lots of monitoring apps these days (virus scanners, firewalls, file protection, file searching, spyware scanners...). All of this takes loads of resources and has little to do with the language chosen.

2. The layering effect, especially on the web.
As someone else pointed out (frameworks built on top of frameworks...), not only is the web browser built on its own framework, you then have another layer of web languages (Flash, JavaScript...) on top of it.

3. Most of all, software does a heck of a lot more.
Everyone now expects predictive text input, fancy GUIs...

Reply Score: 3

My 2 cents
by suryad on Sun 20th Apr 2008 20:04 UTC
suryad
Member since:
2005-07-09

I think frameworks are very important nowadays, since they abstract away a lot of things the coder would otherwise have to code. That "layer" costs some performance, but a well-written .NET/JEE app works and runs great. Look at eBay, for example: a lot of it is written in Java with their homegrown frameworks, and considering its complexity, that site is pretty amazing in terms of the features and performance offered to the end user.

From my experience, and in my opinion, frameworks take care of a lot of things that let apps scale horizontally, keep transactions in proper states, etc. There are of course different classes of apps - what I call desktop apps, like Photoshop, Picasa, Azureus, uTorrent - that the typical Joe would use, and there different languages have different benefits.

Azureus follows the write-once-run-anywhere principle; that is why they went with Java. uTorrent went for the lightest weight and highest performance, which is why they did not go with Java - but the tradeoff is that the developers have to maintain separate source trees and builds for different OSes, a problem that would not exist had it been written in Java.

Java is definitely not as slow as most people claim. It really is very performant (eBay, again). It's mostly in how people code. Sure, it has its shortcomings - it is overly verbose, and C# has a lot of features that Java has implemented poorly - but that is a different discussion.

Another thing to consider is that different languages are meant to tackle different problems. You do not need the heavy JEE framework to build a simple e-commerce site; you would be better off with PHP, MySQL, and Apache. But if you are building an enterprise-level webapp, you would want Java/JEE, an Oracle DB, and WebLogic as an app server. The former stack probably will not scale well once millions of users start logging on; that's where you need the concurrency features and such that the Java/JEE frameworks provide. Sorry for the long post - hope I made some sense in conveying my opinion. Cheers!

Reply Score: 3

This is wrong
by trenchsol on Sun 20th Apr 2008 22:06 UTC
trenchsol
Member since:
2006-12-07

I am typing this on Linux with IceWM window manager. It is very fast and responsive. If I choose to run KDE application, that might be a bit slower.

Yes, under some circumstances I could do more with KDE than without it. That does not necessarily mean KDE must be part of the operating system; I need the choice to install it or not. KDE relies on dynamic libraries which can be added or removed as needed, and it works on different kernels and OSes. That is the ONLY TRUE WAY to build software.

How does this translate to the Windows world? In the Windows world I must always install their equivalent of KDE. The conclusion is that systems like Linux, Solaris and BSD are flexible, and Windows is not. That is the main selling point of those systems.

Doom 3 was one of the first games that required Win2k or XP. Until then, a dedicated gamer could have fun with a cheaper PC configuration.

Reply Score: 4

Misleading
by TheUnfocusedOne on Sun 20th Apr 2008 22:40 UTC
TheUnfocusedOne
Member since:
2008-04-20

The article fails to mention technologies such as Just-In-Time (JIT) compiling, adaptive optimization, escape analysis, garbage collection, etc.

While some of these can work with native compiled languages, they're more suited for managed code.

You get a lot of benefits from these, beyond shortened development time and more robust code: some of the newer JIT techniques can actually produce managed code that runs FASTER than native code, depending on how it's used.

Reply Score: 2

The author doesn't know about Master Foo
by samad on Sun 20th Apr 2008 22:40 UTC
samad
Member since:
2006-03-31

Apparently the author hasn't been exposed to the good programming practice espoused by Master Foo:
http://www.faqs.org/docs/artu/ten-thousand.html

Reply Score: 4

The relative speed of bloatware
by phoehne on Sun 20th Apr 2008 22:52 UTC
phoehne
Member since:
2006-08-26

http://www.infoworld.com/article/08/04/14/16TC-winoffice-performanc... quantified the actual performance of Office on Vista, 2000, and XP. The interesting point, which this author almost gets to, is that we're using computers that are much faster, with loads more memory, to do the same work at the same speed. Is the operating system doing more? Yes, it almost certainly is, at some level.

I remember when spell checking was something you did to a file separately from word processing, because word processors lacked built-in spell checkers (you can only do so much in 64 KB). Now the word processor does on-the-fly spell and grammar checking, scanning your document for things you might be misspelling or common tasks you might be trying to perform. Yet I would argue that since Office 2000, most people use the word processor in almost exactly the same way.

On benchmarks, Java and .NET are within the same order of magnitude as C/C++ (see the Computer Language Shootout). In real-world use, though, I've noticed a big difference when I fire up native versions of software versus Java (or even .NET) versions: I hear the fans spin up, and GUI code especially is not as responsive. You often notice the difference in memory consumption too. I've had similar experiences with Python GUI applications.

We now live in a world where a gig of RAM is the starting point for a new machine. Whereas 9 or 10 years ago we had 32-bit 500 MHz CPUs with 64 MB of memory, we now have two to four 64-bit cores at 2-3 GHz. Buses are faster, memory is faster, and networking and even disks are much faster. Despite all these tremendous advancements, it's all been eaten up by the day-to-day work we do, and I'm not sure anyone can really say where it's gone.

Let me put it another way. These OSes and apps are doing more work. At the very least they're spinning up our CPUs, consuming memory, and eating up hard drive space. It may be library bloat: everyone comes out with a library to do some thing (including competing libraries), and when they write it they make sure every possible contingency is handled. Where 1,000 lines of code would have been adequate for 95% of users, 2,000 lines were used to make the library usable for 99% of users. Maybe part of it is real-world performance differences between a Java database front end and a native one. The result is the same: lots more capacity, but little outside of games is that much faster.

Reply Score: 2

GUI applications
by shermamic on Mon 21st Apr 2008 01:00 UTC
shermamic
Member since:
2008-04-21

I felt I had to register to post on this topic.

Using C++ coupled with wxWidgets can be as productive as using Java. wxWidgets has a powerful string class, and you can use all the STL containers and algorithms. And Swing lacks some controls, like a calendar control, for example.

BTW, if productivity is THE most important consideration, count how many lines of Java you need to achieve the same result as one line of Python:

print open(file).readlines()

Reply Score: 2

RE: GUI applications
by suryad on Mon 21st Apr 2008 04:06 UTC in reply to "GUI applications"
suryad Member since:
2005-07-09

Hehe, no one disagrees that Java is verbose, but it does give you an amazing amount of control. The verbosity is something it needs to get rid of. I am no PhD, but as far as I understand, Python is not really object oriented, is it? Isn't that one of the main reasons why doing a simple thing like reading a file in Java often tends to result in a lot of boilerplate code? I am sure it is the same with C# as well... the other managed code language.

Reply Score: 2

RE[2]: GUI applications
by kmarius on Mon 21st Apr 2008 18:14 UTC in reply to "RE: GUI applications"
kmarius Member since:
2005-06-30

I am no PhD, but as far as I understand, Python is not really object oriented, is it? Isn't that one of the main reasons why doing a simple thing like reading a file in Java often tends to result in a lot of boilerplate code?


Python is object oriented. The "open" function returns a file object.

"print open(file).readlines()"

This could be written as:

file = open(filename)
lines = file.readlines()

print lines
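The snippet above is Python 2. For completeness, here is a minimal self-contained sketch of the same file-object idea in modern Python 3 syntax, using a context manager so the file is closed deterministically; the temp-file setup exists only to make the example runnable:

```python
import os
import tempfile

# Create a throwaway file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "w") as f:
    f.write("first line\nsecond line\n")

# open() returns a file object; "with" guarantees it is closed
# even if an exception is raised while reading.
with open(path) as f:
    lines = f.readlines()

print(lines)  # ['first line\n', 'second line\n']
```

Same one-method-call brevity, but without leaking an open file handle the way the bare one-liner does.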

Reply Score: 1

RE: GUI applications
by evangs on Mon 21st Apr 2008 05:56 UTC in reply to "GUI applications"
evangs Member since:
2005-07-07


BTW if productivity is THE most important consideration, count how many Java lines you need to achieve the same result you can do using one line of code in Python:


And then you start wondering what happened to "teh snappy". Java is usually a good trade-off between performance and productivity. Python is really good for knocking up a quick application, but it's horrendous when it comes to performance.

Reply Score: 2

This has happened before...
by google_ninja on Mon 21st Apr 2008 02:32 UTC
google_ninja
Member since:
2006-02-05

Honestly, the author is WAY behind the times.

The first big "The Sky Is Falling" incident was the move from pure assembler to the more portable C. C executed WAY slower, and the executables were WAY more "bloated" than pure ASM. But it was judged that the maintainability and portability of a higher-level language VASTLY outweighed the benefits of pure ASM. The hardware quickly met the challenge, and now nobody in their right mind would suggest doing a big project in ASM.

Then came the rise of the managed languages. It was determined that the stability, maintainability, security, and portability gains we get from managed languages VASTLY outweigh the negligible overhead of the VM. This is still debated in some quarters, but in most domains the discussion was over years ago, and the only places that still work in C++ are where performance outweighs every other consideration, just like what happened to ASM years before.

What the author doesn't realize is that we are at the beginning of the third major revolution: the rise of the dynamic, domain-specific languages. People are realizing that lines of code are really a chain around our necks, and that the further performance penalty of using a dynamic language is outweighed by the ability to be more flexible and succinct. Not only that, but the barrier to creating a relatively high-performance language has dropped very low, and we are seeing a rise of new languages that the industry hasn't really seen before (or at least not outside of universities, and not taken this seriously).

This guy's opinion piece would have been a lot more relevant 5-10 years ago, when people were still talking about this.

Reply Score: 4

Comment by Soulbender
by Soulbender on Mon 21st Apr 2008 03:21 UTC
Soulbender
Member since:
2005-08-18

...have to install Windows XP or Windows 2000 first, using my originally purchased Windows 95 or Windows 98 discs, before I can then install Vista, for example.

And, that’s the subject of this article.


So the subject is that he deliberately makes things harder than they have to be? What sane person installs Vista via upgrades from Windows 95?

And all because the real software developers out there, the ones who can program in assembly, or C or even C++ (though even with C++ you begin to lose so much over C, about 10% in performance or more), are a dying breed. They’re dinosaurs.


Someone can't keep up with new developments and is feeling threatened.
Seriously, real developers? What about punch cards? THAT is real programming, my friends.

We need people who are willing to spend the extra few weeks or months it would take to develop and maintain code using something that’s close to the machine.


An extra *few* weeks or months? How much experience did this guy have again? Writing a complex application in "something closer to the machine" is not going to take a few weeks or months longer; it'll take much longer than that.

I challenge everybody out there to dig up an old copy of Windows 95 or Windows 98, and install it on your machine and note how much faster that user interface operates than the modern ones today.


And XTree Gold, Norton Commander, and even DOS Shell are faster than the Windows UI. Welcome to inaccurate comparisons.

that truly usable system for the sake of higher graphics and ribbons


Usable != fast.

In fact, if anyone would like to fund me for such a challenge, I’ll stand up personally against anyone in this business and prove to them that with the proper initial base design that the entire thing being done in these bloated emulated environments can also be created using the older models, and with greater speed, flexibility and efficiency even.


Rewrite Oracle 9 in assembly.

Edited 2008-04-21 03:22 UTC

Reply Score: 3

From a simple user's perspective
by arkeo on Mon 21st Apr 2008 06:02 UTC
arkeo
Member since:
2008-04-21

All these comments about different kinds of programming languages have been very informative. But from a regular user's point of view, they are pretty much irrelevant (no disrespect).

The fact is, you buy a laptop, say in 2004, pre-installed with XP; it falls from your hands in 2008 and you break the screen, so you buy a new one, pre-installed with Vista. It feels slower. Cooler, very aesthetically pleasing (and I'm a former Mac user), but sluggish. And there's more to it than just the way it "feels": you can't play Diablo II anymore, because it just isn't fast enough (and I'm talkin' about a game released what, 10 years ago?). And by the way, the *new* laptop has more than twice the processing power (Athlon vs. Turion X2), a newer GPU (NV 5x00 vs. 7300) with 8x as much memory (64 MB vs. 512 MB), and 2 GB of DDR2 vs. 512 MB of DDR as system memory.

Now: why? I'm no programmer, but I'm not a fool. I don't need a virus scanner, never had a virus in 4 years of Windows (Firefox and a little common sense), I regularly deactivate most of the apps that start up at log-in, and I've never needed anti-spyware. I run SETI with BOINC, but that's something I'm very well aware of, so when I need a little extra juice I shut it down. But I still don't get it: why is this new "monster" laptop of mine slower than the older one (the irony: both cost around € 1,200 - 1,500 at the time of purchase)?
Yes, I know that OpenOffice has to start from scratch because I deactivated the auto-start, and so do QuickTime, and iTunes, and Picasa, etc. But I'm not talking about applications' startup times, just the way the OS reacts: basic operations, everyday tasks, the occasional 10-year-old game!
Am I asking too much?

I had to get rid of Vista and install a separately purchased copy of XP. Feels like being back home again. Just like when I turn on my old "toilet-seat" iBook running System 9 (I know it's called MacOS since version 8 =)

So, I suppose this affects everybody, Win or Mac or Linux users: why do we need better hardware to do the same exact things? I suppose this was the whole point of Mr. Holwerda's post.

Still looking for an answer myself, I just wanted to contribute with my doubts...

arkeo

Reply Score: 2

One Word People....Delphi
by snorkel2 on Mon 21st Apr 2008 06:20 UTC
snorkel2
Member since:
2007-03-06

Want C# and Visual Studio RAD but with native code? Look no further than Delphi. No VM or runtimes, just a self-contained exe that will run on any Windows PC.
http://www.codegear.com -- and if you need cross-platform, you can always use Free Pascal and Lazarus: http://www.freepascal.org.

Reply Score: 2

RE: One Word People....Delphi
by WereCatf on Mon 21st Apr 2008 06:35 UTC in reply to "One Word People....Delphi"
WereCatf Member since:
2006-02-15

Want C# and Visual Studio RAD but with native code? Look no further than Delphi.

I have used Delphi for a few casual Windows apps I needed, and it was pretty good all in all. The syntax is close enough to C that it took just a few minutes to get going, and the RAD helped quite a lot. And that was several years ago; I'm sure it has gotten quite a few enhancements since. The downside to Delphi is that it is not cross-platform. I would love to have a good IDE/RAD for C/C++ or similar on Linux; I just have trouble finding any. I have tried Anjuta, yes, but it feels like a total mess, and every time I have tried it, it has been very, very unstable.

Reply Score: 3

RE[2]: One Word People....Delphi
by elrod on Mon 21st Apr 2008 09:06 UTC in reply to "RE: One Word People....Delphi"
elrod Member since:
2006-11-15

And that was several years ago, I'm sure it has gotten quite a few enhancements over the years.

I have very fond memories of using Delphi 1-5. After a long pause (using C++ and Java), I did a big project with Delphi again last year.
The Delphi IDE has become fat and very slow, the code completion is unreliable, the class documentation is a joke, and the worst thing: the standard library has barely changed in the last fifteen years. No collection framework; TStrings (and its descendants) is still your only friend.

Reply Score: 1

NeXTStep is a prime example
by tyrione on Mon 21st Apr 2008 07:05 UTC
tyrione
Member since:
2005-11-21

of a beautifully functional and unobtrusive interface that was fast even on pitifully slow hardware; if you put it on modern hardware [assuming the device drivers were available], it would literally fly.

We have spent an awful lot of cycles entertaining and distracting ourselves with computer systems rather than becoming more productive and thus affording ourselves more free time to live our lives, away from computer systems.

Reply Score: 3

RE: NeXTStep is a prime example
by arkeo on Mon 21st Apr 2008 07:41 UTC in reply to "NeXTStep is a prime example"
arkeo Member since:
2008-04-21

assuming the device drivers were available it would literally fly


True of any of the old operating systems: System 7, BeOS, NeXT, and I suppose Win2k and Yggdrasil Linux as well, etc... If that weren't the case, we'd all be running one of those, don't you think?

arkeo

Reply Score: 1

RE[2]: NeXTStep is a prime example
by tyrione on Mon 21st Apr 2008 09:16 UTC in reply to "RE: NeXTStep is a prime example"
tyrione Member since:
2005-11-21

"assuming the device drivers were available it would literally fly


True of any of the old Operating Systems: System 7, Be OS, NeXT, I suppose Win2k and Yggdrasil Linux as well, etc... If that weren't the case we'd all be running some of those, don't you think?

arkeo
"

It's kind of a sick twist on reality that we abstract layers on top of layers to make our OSes look more sleek and sexy, yet application interaction [Services in NeXTSTEP/OS X] seems to be less cooperative than it was a decade prior.

I prefer a collection of focused, small applications with an open API standard for interacting; combined, they let one automate and leverage one's needs more rapidly.

These large app suites are a bit odd on Unix/Linux. I expected them on Windows, but not under a set of operating systems that were designed around the "less is more" approach.

For instance, instead of a series of small graphics applications that together duplicate what the monolithic Photoshop does, we have single applications attempting to cover 80-90% of Photoshop.

What attracted me to NeXT was the Services between applications, which could really help one get a lot of work done, intelligently and rapidly, with highly professional output.

OS X is slowly turning back towards its roots, and once Carbon is gone I can see it doing so more clearly, but it's been a real disappointment having to wait a decade for this to become reality.

Linux has a real shot, but with GTK+/Qt and the competing GNOME/KDE wars we've had less leveraging of the best of both worlds and more mere coexistence.

GNUstep is somewhere floating in the ether of the X11 world. Too bad there isn't a unified Services API to work between the three and leverage them simultaneously without it being a royal pain.

GTK+/Qt are doing a lot of work making stuff less difficult.

I like the options, but would love to see more efficient reuse between platforms.

We all have mixed install systems of GTK+/Qt and GNOME/KDE and a small group of GNUstep, not to mention other systems install base.

They all run under Xorg, but they have definitely never bothered to see how to leverage one another.

Reply Score: 2

coding habits and knowledge
by l3v1 on Mon 21st Apr 2008 08:50 UTC
l3v1
Member since:
2005-07-06

Reading through all this, I have to say I feel what the original writer feels. And very often.

I have tried several times to put into words why this might be happening, but I always give up, since I can only see this much of the picture. The thing is, many of these "modern" coders [i.e. those surfacing in the last few years] just lack a lot of the experience, practice and knowledge that would make a good coder great. Very many of them just don't have the know-how to create great-running code, and they leave very much in the hands of a VM, a framework, a compiler, and none of those is good enough [yet (?)]. Many of them don't care, or can't do anything, about optimization, and generally they are not even expected to. Usually there isn't even time [allocated] for it.

A few days ago, I was very pleasantly surprised by a student [who came to work with us a while back and was given various coding jobs] who came up to me and showed me some really good modifications he had made to some of our code, almost doubling its speed. But this was a lone occasion, and only one guy. Such a thing had never happened to me before in the ~8 years I've actually been working [and I come into contact with lots of coders, students and fresh graduates].

Well, probably every generation of coders thinks theirs was the last good one and the new ones don't know a thing ;) I don't share this opinion; I just think the new fellas have a different kind of knowledge, one that is not necessarily carefully chosen. But that is not their fault, not by a long shot.

Reply Score: 3

My .02
by TBPrince on Mon 21st Apr 2008 10:13 UTC
TBPrince
Member since:
2005-07-06

The author has a point, but he doesn't go deep enough into the technical details to explain the reasons why. If you start a technical debate, you can't avoid going into technical details.

Thom was right when he said that Windows 95 might be faster, but only because it's simpler and doesn't do all the things Vista (for example) does.

The reason is that back in the early 2000s, computers started to have enough horsepower to perform most tasks without needing to go at full speed. Simply put, we have more horsepower than we mostly need.

Do you remember the days when listening to an MP3 was something you could only do when no other application was active? Those days are over, and now you can perform tens of other tasks while keeping your MP3 player on. That's the core of the problem.

Any old piece of software will perform much better than a modern one on modern computers, simply because it is much simpler, having been designed to run on much slower hardware. The (easily verifiable) proof is that no modern software would run acceptably on such old hardware (for significant software like Excel or Word, at least; just try running your favorite MP3 player on a 1990s-era PC...).

The thing is, at a certain point we found out that we had horsepower in *excess* for most tasks and started to switch from performance-oriented development to functionality-oriented development, the same way we switched from performance-oriented languages (like C++ and even ASM) to goal-oriented languages (easier development, richer functionality, integration with other software, better security, and so on). That's the reason almost no one develops software in ASM anymore: it's simply not worth the pain, as we no longer have performance constraints, so we just want more functionality.

It's a normal evolution in hardware and software.

However, it's true that the industry tries to push you to buy new hardware when you could use older hardware more efficiently. It's also true that most users don't actually need newer versions of software (I think millions of people could easily use Word 95/97 without any need to switch to 2007...). Yet having the latest hardware has become a status symbol, and a consumerist society pushes you to use the latest products even when you don't need them. Moreover, software houses always try to push you to buy newer versions by dropping support for older ones.

But that's not developers' fault: Capitalism is just this plain stupid.

Reply Score: 2

RE: My .02
by REM2000 on Mon 21st Apr 2008 10:46 UTC in reply to "My .02"
REM2000 Member since:
2006-07-25

I agree. I remember in '98 running a P75 with 16 MB RAM and Windows 95 OSR2. Playing an MP3 meant 75% CPU utilisation.

Fast forward to the present: I was copying 70 GB onto my iMac while my MacBook was copying 30 GB from it. I had iTunes playing a TV show while compressing some video, and all this with Dashboard, Mail and other small apps running at the same time. No slowdown; I was able to switch between them. It still amazes me how far computers have come.

Reply Score: 2

RE: My .02
by rcsteiner on Mon 21st Apr 2008 21:06 UTC in reply to "My .02"
rcsteiner Member since:
2005-07-12

What if my favorite MP3 player was Z! (a text-mode MP3 player that absolutely flies on 90's hardware)? :-)

Reply Score: 2

RE[2]: My .02
by TBPrince on Mon 21st Apr 2008 21:45 UTC in reply to "RE: My .02"
TBPrince Member since:
2005-07-06

Ah! You were obviously out of context :-]

Seriously, we all tried all sorts of tricks to gain some CPU cycles and let ourselves use more than just our MP3 player. But those times are over.

Have you ever tried to run some really old software on newer hardware, only to find the program unusable because it was TOO FAST to be used? That's the paradigm ;-)

Reply Score: 2

End user point of view
by Anacardo on Mon 21st Apr 2008 10:15 UTC
Anacardo
Member since:
2005-10-30

I'm no coder, and therefore I can only express the point of view of an end user.
I understand all your arguments, and still I cannot but remain a little puzzled when I hear things like "those Oses did a lot less than what we have today". While this might be definitely true, I cannot but think that most of these features are not needed or at least seldom nedeed. Some simple examples on WinXP: 1) without a central update application, every damned app (adobe, Java, Google, Apple) installs its own updater that run silently in the background. If this is not a waste of resources I don't know what it is. 2) Explorer looses tons of time trying to fetch additional information from ".avi" even when instructed not to do so. It gives you unnecessary information about installed programs, Hds running out of space, and other things like that. Surely you can disable most of these things, but I wonder whether these "features" should have been added in the first place. And it goes on and on... I have dozens of services running on my machine, some of which are supposed to "speed up the launch of a certain application". Maybe without all those services my applications might launch even faster. Anyway the perception of an uninformed end user is: current OSes and apps are bloated with unnecessary burdens, where features are added mostly following some "why not" scenario instead of answering the real needs of the end users.
And yes, Windows NT 4 is a monster of speed compared to XP on the same hardware, and while it does less, it remains to be shown that it really lags behind on useful features rather than bells and whistles.

Reply Score: 2

Newer things works better...
by ciplogic on Mon 21st Apr 2008 12:29 UTC
ciplogic
Member since:
2006-12-22

I take as a comparison the three main platforms of today:
- Windows
Windows (the NT kernel) no longer depends on the BIOS, so it doesn't wait on some crappy DOS routine to write to your hard drive. Secondly, the caching is much better, nothing like the old SmartDrive. Windows now has no major memory leaks (and the remaining ones are really hard to find), and the whole code base is optimized by compilers that exploit the CPU's branch prediction and cache sizes. If one application gets stuck in an infinite loop, or starts reading from a CD, it will not freeze your system.
- Mac OS X: they took the OS and optimized it iteratively. Everyone knows that Leopard will not work on a non-SSE2 CPU, because the most heavily used operations are optimized for it.
- Linux: the 2.6 kernel is the most scalable kernel yet, with improved threading support, and most scheduling algorithms are O(1) (they take constant time for most operations).

Java, Gcc, .NET always improve performance over versions.

Why do you perceive the experience as sluggish? When a company makes software, it takes into account the resources that are available. If today I have a quad-core AMD machine that costs 250 dollars, 2 GB of RAM, a 64-bit CPU, and optimized libraries, my question is: should I write the code assuming it will run on a 486? Should I keep local variables for everything and build silly caches to gain 15% performance?

It's not about price here; it's about the target. How much does a computer cost? Let's say 500 euro for an entry-level one. How much does the software cost? Most of the time, more than the computer itself, so the user will ask for features, not for ultimate speed in a bash-like terminal.

If you really cared about speed, would you disable themes, set a lower resolution, stop all security services, and use WinAMP 1.x?

I would not. I prefer Banshee or iTunes, and will wait a year until they are optimized further.

I will put one question: how will the industry grow if I have 4 cores and, 99% of the time, the OS uses 1% of them in my desktop experience and offers nothing more?

Reply Score: 1

major86
Member since:
2008-04-21

I don't see any problem in using Java/C#. Yes, there are some performance issues, but you get a great number of libraries and can become quite efficient. Efficiency is the real point of it all: a programmer needs to get as much done as he can in any given time; in the end, one needs to feed oneself. But really, a programming language is just a tool, and if it's acceptable to wait a few seconds for a database client written in Java, why would you use ASM instead?
About overall performance in Windows:
The problem is legacy support and some architectural decisions. Everyone can see how much more stable the OS became after going the NT way. But MS can't leave legacy software behind, and that's why the whole system is going sluggish. Apple was able to do so when they switched to Mac OS X (and Cocoa) and migrated to Intel. MS can't do that as easily, because many of their business partners use some really old stuff. The only escape for MS is to forsake their past and start anew.
P.S. I think optimal performance can be achieved only if we optimize for a single system (like closed systems: Macs/Xbox/PS/cell phones). It's the only way to get everything we can from the system.
----Sorry for my English.

Edited 2008-04-21 15:30 UTC

Reply Score: 1

random comments
by Rugxulo on Mon 21st Apr 2008 16:40 UTC
Rugxulo
Member since:
2007-10-09

I disagree with some points here:

* "optimizing is easier for a unified platform": yes, but we need competition (and Mac OS X isn't "free" enough to qualify), plus who wants to be locked into hardware? If freedom isn't an issue (but it is), use Intel's compiler or MSVC; they are better than GCC (esp. at vectorization). It's probably better to support more than one compiler anyway. BTW, lots of people still use GCC 3 when GCC 4 is much better (in most ways).

* "... runs faster b/c of SSE2 (required)": you should always check for SSE1/2/3/4 at runtime and use them if found, with a generic routine as a fallback (though most don't do this).

* "backwards compatibility slows everything down": who wants to throw all their old apps away? Some programs have no equivalents. And not everybody wants to buy separate OS licenses just to manually install (ugh) and run them in a VM (besides, Vista isn't nearly that backwards-compatible anyway, and it could still be improved). Also, just (luckily) having the source isn't enough; some old sources won't compile anymore!

* "buy more RAM, it's cheap": if more developers used normal machines (and not 4 GB RAM behemoths), then we end users wouldn't have to suffer from their bloat... if you don't experience it first-hand, you won't care how your program runs on a slightly less new CPU (not everyone will be buying a new quad-core anytime soon). I'll live with "10-20% slower"; it's the "upgrade your RAM, HD, CPU" mantra (every freakin' day!) that irritates me. Ideally, since apps usually have to coexist with others, no single program should use more than 1/10 of a typical user's total RAM (512 MB ???).

* "assembly is dead": no, 100% assembly isn't for everything, but it is extremely useful when you know you can do better (and it can help A LOT, even in a mostly C++ project); we are all limited by skill and motivation more than by time. BTW, don't forget that compiler writers themselves often need to know assembly in order to target a CPU efficiently (esp. regarding SIMD). So any benefit you see from HLLs has to be attributed to the modern asm knowledge that built them. Sometimes you really, really, really benefit from assembly. And sometimes the stupidity (no offense) of braindead HLL compilers can make you sick.
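The runtime-check idea in the SSE bullet (probe for a CPU feature once, then dispatch to the fast or the generic routine) can be sketched in a few lines. This is an illustrative Python sketch of the pattern only; has_simd() is a hypothetical stand-in for a real CPUID query, and the "vectorized" routine is a placeholder:

```python
def sum_generic(values):
    # Portable fallback that works on any CPU.
    total = 0
    for v in values:
        total += v
    return total

def sum_vectorized(values):
    # Stand-in for a SIMD-accelerated routine.
    return sum(values)

def has_simd():
    # Hypothetical capability probe; real code would query CPUID
    # (or the OS's reported feature flags). Assume absent here.
    return False

# Pick the implementation once, at startup, not on every call.
sum_impl = sum_vectorized if has_simd() else sum_generic

print(sum_impl([1, 2, 3, 4]))  # 10
```

The point of the pattern is that one binary serves every CPU: the fast path is used where the feature exists, and nothing crashes where it doesn't.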

Moral of the story: if it ain't broke, don't fix it. (But things can always be improved!)

Reply Score: 3

RE: random comments
by major86 on Mon 21st Apr 2008 17:16 UTC in reply to "random comments"
major86 Member since:
2008-04-21


* "optimizing is easier for unified platform": yes, but we need competition (and Mac OS X isn't "free" enough to qualify), plus who wants to be locked into hardware? If freedom isn't an issue (but it is), use Intel or MSVC, they are better than GCC (esp. vectorization). It's probably better to support more than one compiler anyways. BTW, lots of people still use GCC 3 when GCC 4 is much better (in most ways).

I mostly agree with you, but I still like the Mac. It doesn't matter that it isn't "free". I'd like it if PCs had the same model as consoles: essentially the same hardware for 3-5 years, with just small improvements. I know it sounds silly and old-fashioned, but it gives a more "clear" environment and increases performance and stability. Sadly, that can only be done in the embedded sector. I can't forget Psion's PDAs, with a 33 MHz ARM chip that could run portable versions of Word and Excel.

Edited 2008-04-21 17:18 UTC

Reply Score: 2

RE: random comments
by major86 on Mon 21st Apr 2008 17:58 UTC in reply to "random comments"
major86 Member since:
2008-04-21


* "backwards compatibility slows everything down": who wants to throw all their old apps away? Some programs have no equivalents. And not everybody wants to buy separate OS licenses just to manually install (ugh) and run it in a VM (besides, Vista ain't nearly that backwards-compatible anyways, and it still could be improved). Also, just (luckily) having the source isn't enough, some old sources won't compile anymore!

Yep, but what about running it all this way:
make the system highly customizable and create profiles associated with related services. For example, if I run the "Gaming" profile, the system runs only the services needed for games. If I then want to run Word, it switches to the "Office" profile (unloading the services needed for gaming and loading the ones required to run an office suite). With this system we can build a compact core common to all profiles, make each profile leave a very small footprint on the whole system, and there you have it.
This model would make optimization much easier for a giant like MS. The whole Xbox 360 "OS" is around 50 MB, which alone proves that MS can do some amazing stuff. Windows 7 is supposed to be modular, but I doubt it will be that much of an improvement over Vista.
thx.

Edited 2008-04-21 18:17 UTC

Reply Score: 1

RE: random comments
by ciplogic on Mon 21st Apr 2008 18:16 UTC in reply to "random comments"
ciplogic Member since:
2006-12-22

* "buy more RAM, it's cheap": backward compatibility is crucial; I will keep the old code as long as it applies. Think of WinAmp themes: say you like WinAmp 2 themes; for sure you will want WinAmp 6 not to drop support for them. But there will be other users who like WinAmp 3 themes, which have a floating layout. So WinAmp 5 has to carry the bloat of both code paths inside it, even though you personally need either the code from WinAmp 2 or from WinAmp 3, but surely not both. Of course, WinAmp 6 may introduce its own theme capabilities, say 3D ones using Vista's capabilities, but the problem remains the same: you will need only one.
What I wanted to say is: bloat is OK as long as the application does everything the preceding application did while still offering new capabilities. Since it has more code, of course it will ask for more RAM. And about RAM prices: 8 GB of RAM costs about 400 euros (with VAT, in Europe). Don't get me wrong, but adding RAM is the best way to handle the extra things that applications and their extensions consume.

* "assembly is dead": YES. I tell you this as a developer. The only place you may still use it today is in JIT (just-in-time) and compiler technology. The point of assembly is to gain some extra speed in certain scenarios, but in most cases you gain much less than the compiler does. The compiler can align your data and make aggressive inlining decisions based on PGO (profile-guided optimization) that you may never see. Another issue with assembly is that it is error-prone, like old plain C. Dealing with pointers, or worse, with registers, leads to stack overflows and corner cases that are easy to miss; you will not write code that cares about exception handling, and you will end up with unreadable code. It will also hurt you when you have, say, MMX code: today SSE3 (or 4) exists, and GCC knows how to vectorize for it, but it cannot do that at the level of assembly code; it needs at least the C code. Meanwhile, the code C# or Java generates today is better than code generated from assembly by year-2000 compilers.
So: no gain, only a boat anchor you will eventually need to throw overboard.

Reply Score: 1

RE: random comments
by evangs on Mon 21st Apr 2008 18:29 UTC in reply to "random comments"
evangs Member since:
2005-07-07

you should always...


That's what your post boils down to. "You should always..."

In a production environment, you'll know that trade-offs are always made to get a product out the door on time and within budget.

Reply Score: 2

RE[2]: random comments
by rcsteiner on Mon 21st Apr 2008 21:07 UTC in reply to "RE: random comments"
rcsteiner Member since:
2005-07-12

Not if one is developing software for in-house use. :-)

That's one of the many reasons I've never considered working for a company which sells software as a product, at least not as its primary reason for being.

Reply Score: 2

rcsteiner
Member since:
2005-07-12

An operating system is generally composed of a few basic parts: a kernel, one or more shells to interface with end users (typically a CLI or command prompt and a GUI or desktop), and one or more additional trusted components (e.g., "drivers") to aid the kernel in controlling hardware peripherals, filesystems, and other similar beasties.

In a sane operating system, multimedia elements such as photo management tools, music management applications, and the like are applications, not part of the OS, even if those elements are marketed as being part of (and are normally bundled with) a given platform.

In other words, the inclusion or exclusion of multimedia tools in a given operating system should not (by itself) have any impact on that operating system's general performance. None whatsoever.

Just because an older OS didn't come with Frank and Henry's Multimedia Wonderapp in the box doesn't mean FHMW can't run decently on said OS if properly ported to its native API. :-)

If things are slow in a given hardware context compared to another OS in the same context, look to the kernel, to the specific development framework(s) being used, or to the application architecture (e.g., are they using threading effectively?) for reasons for the performance issue.

I suspect that some older OSes (Windows 9x comes to my mind immediately) would fare rather badly in today's more demanding environment, but I also suspect that some would not.

More to the point: I would speculate that older kernels which were used to dealing with heavily multithreaded applications in the past (such as BeOS, and perhaps the OS/2 2.x and later kernels) would probably do fairly well on modern hardware even with today's multimedia applications and multimedia-heavy shells.

Such OSes were quite used to handling concurrent loads on much lesser hardware, and all today's hardware would do is provide them with more breathing room for doing what they were capable of handling in the first place.

Most of the issues we see in modern OSes can be traced either to poor application design (Only one thread? Or perhaps using one thread where five would speed things up significantly?) or to the arguably inappropriate use of certain types of heavy application frameworks by developers who would rather use what they know than learn something more appropriate. And sometimes it's just bad (or lazy) programming. :-(

Reply Score: 2

middleware
Member since:
2006-05-11

Have we already forgotten the history of preemptive versus cooperative OSes? A cooperative OS has less overhead, which is why systems like NDS were much more popular on PCs than preemptive OSes like Unix or NT. But over time the hardware advanced, and that performance headroom was traded for security and maintainability.

Reply Score: 0

It's the coder, not the toolkit
by Touvan on Tue 22nd Apr 2008 15:48 UTC
Touvan
Member since:
2006-09-01

I just have to say: a good coder who is familiar with his/her platform would have no problem making even scripted languages like JavaScript or ActionScript absolutely scream.

The problem isn't the environment or the language; it's the coder. There are a lot of them just trying to make it work, never mind making it work fast.

Reply Score: 1