Linked by Thom Holwerda on Mon 18th Dec 2006 18:34 UTC, submitted by anonymous
Java performance testing is usually left until last in the application development cycle - not because it's unimportant, but because it's hard to test effectively with so many unknown variables. In this month's In pursuit of code quality, Andrew Glover makes a case for performance testing as part of the development cycle and shows you two easy ways to do it.

These are neat-looking tools, and I wouldn't be surprised if I end up using them on a project. However, optimizing early is still a bad idea, for the same old reasons. Unlike functional testing, where it's not acceptable to have calculations that are approximately right or features that work 75% of the time, performance testing is all a matter of degree. Without well-specified performance targets, one could spend a lifetime optimizing just one application. On the other hand, well-specified performance targets are generally stated in terms of perceived performance for the end user, which means you're basically running the completed (or nearly completed) application to evaluate its performance. When you do this, you'll typically find a few specific bottlenecks which, when resolved, yield massive improvements in overall perceived performance for [often] relatively small investments. If you optimize too early, you end up expending effort on code that, in the overall scheme of things, is immaterial.
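The bottleneck point above can be sketched with some Amdahl's-law-style arithmetic. The numbers and class name here are purely illustrative: if a hot path accounts for 950 ms of a 1000 ms run, a 10x speedup of the remaining 50 ms path barely moves the total, while a mere 2x speedup of the hot path cuts it nearly in half.

```java
// Illustrative sketch: why fixing the measured bottleneck beats optimizing
// cold code. All figures are made up for the example.
public class HotSpot {
    /** Total runtime (ms) after speeding up the second path by 'factor'. */
    static double afterSpeedup(double untouchedMs, double optimizedMs, double factor) {
        return untouchedMs + optimizedMs / factor;
    }

    public static void main(String[] args) {
        double hot = 950.0, cold = 50.0; // of a 1000 ms total run

        // 10x speedup of the cold path: 950 + 50/10 = 955 ms. Barely better.
        System.out.println(afterSpeedup(hot, cold, 10.0));

        // 2x speedup of the hot path: 50 + 950/2 = 525 ms. Nearly halved.
        System.out.println(afterSpeedup(cold, hot, 2.0));
    }
}
```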

The other problem with optimizing too early is that [good] development practice involves lots of functional refactoring, so if you optimize early, you're likely optimizing code that's going to go away anyway.

That said, writing these sorts of tests as you go does provide you with a useful tool for optimizing later on. I would just caution against trying to change your code based on initial test results. A much more cost-effective approach is to identify common coding patterns and practices that tend to improve performance up-front (and identify anti-patterns that hurt performance), and to reflect these in your coding standards. This way, your code just kind of ends up generally performing well, excepting those bottlenecks you discover during performance testing.
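As a rough illustration of the kind of test you might write as you go, here is a minimal timing assertion in plain Java. Everything here is a hypothetical stand-in: `expensiveOperation()` represents real application logic, and the 200 ms budget is an arbitrary target; in practice you'd use a dedicated tool and account for JIT warm-up rather than trust a single cold measurement.

```java
// Minimal sketch of a performance "budget" test kept alongside the code.
// expensiveOperation() and the 200 ms budget are illustrative placeholders.
public class ResponseTimeTest {
    static final long BUDGET_NANOS = 200_000_000L; // 200 ms budget

    /** Hypothetical workload standing in for real application logic. */
    static void expensiveOperation() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        if (sum < 0) throw new AssertionError("unreachable");
    }

    /** Times a single call and returns elapsed nanoseconds. */
    static long timeOnce() {
        long start = System.nanoTime();
        expensiveOperation();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeOnce();
        System.out.println("elapsed ns: " + elapsed);
        if (elapsed > BUDGET_NANOS) {
            throw new AssertionError("budget exceeded: " + elapsed + " ns");
        }
    }
}
```

A test like this fails loudly when a refactoring blows the budget, which is exactly the "useful tool for optimizing later on" described above.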

Reply Score: 3

stestagg

I get the feeling sometimes that people take this 'don't optimize too early' mantra too literally, writing hugely un-performant (is that even a word?) code in the belief that it will get optimized later. I think there needs to be much more focus on writing speed-efficient code from the start, and then applying optimizations on top of that. Of course, I don't control any budgets on any software projects. ;)

One thing that I really hate is the 'hardware is cheap' argument. It feels too much like an excuse for sloppy work to me.

Reply Score: 3

evangs

No. If you haven't got a program written, how can you know which parts are bottlenecks and need to be optimized? The short answer is: you don't. You could spend hours optimizing a particular code path which on paper looks highly inefficient (say, O(n²)) and turning it into something highly efficient (say, O(log n)). However, that means nothing to the final program if that particular code path was called once in the entire run of the program and n never exceeded 5.
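The point about small n can be made concrete. In this hypothetical sketch, the "inefficient" linear scan and the "optimized" binary search give the same answer, and with n capped at 5 the linear version does at most five comparisons, so rewriting it first would be wasted effort:

```java
import java.util.Arrays;

// Illustrative only: at n = 5, an O(n) scan and an O(log n) search are
// indistinguishable in practice. Names and data are made up for the example.
public class SmallNSearch {
    /** Plain linear scan: the "unoptimized" code path. */
    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {2, 4, 6, 8, 10}; // n never exceeds 5, as in the example

        // Both find index 3; the asymptotic difference is irrelevant here.
        System.out.println(linearSearch(data, 8));        // -> 3
        System.out.println(Arrays.binarySearch(data, 8)); // -> 3
    }
}
```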

You're under the misimpression that slow code is "sloppy". What exactly do you mean by "sloppy" coding? Most of the time code is slow because programmers use the simplest and most straightforward approach to solving a particular problem. Is that sloppy? If you're developing a piece of software, would you rather debug code that is straightforward, or convoluted code that is "optimized"?

Premature optimization is the root of all evil. It's like the old saying: "Cheap, fast, or works: pick two". Most people are content with software that is cheap and works. If you want something that is fast and works, be prepared to pay. Or wait ... and wait ...

Reply Score: 3

Tim Holwerdi
by Tim Holwerdi on Tue 19th Dec 2006 12:01 UTC