Linked by snydeq on Thu 17th Nov 2011 22:47 UTC
General Development
Fatal Exception's Neil McAllister discusses why code analysis and similar metrics provide little insight into what really makes an effective software development team, in the wake of a new scorecard system employed at IBM. "Code metrics are fine if all you care about is raw code production. But what happens to all that code once it's written? Do you just ship it and move on? Hardly - in fact, many developers spend far more of their time maintaining code than adding to it. Do your metrics take into account time spent refactoring or documenting existing code? Is it even possible to devise metrics for these activities?" McAllister writes, "Are developers who take time to train and mentor other teams about the latest code changes considered less productive than ones who stay heads-down at their desks and never reach out to their peers? How about teams that take time at the beginning of a project to coordinate with other teams for code reuse, versus those who charge ahead blindly? Can any automated tool measure these kinds of best practices?"
Permalink for comment 497646
RE: Comment by Luminair
by intangible on Fri 18th Nov 2011 17:41 UTC in reply to "Comment by Luminair"

To his point a bit, I also find that a lot of organizations periodically run down the road of "performance metrics," and it always seems we end up spending more time on the machinery for "keeping track of metrics" than on the actual work itself.

If I'm spending anywhere from 30% to 200% of my time updating all the tools and systems with breakdowns of how much time I actually spent on tasks, that doesn't seem very efficient to me... The alternatives (lines of code, bugs fixed, SCM check-ins) are all so easy to game that they're not really useful either.
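To make the "easy to game" point concrete, here's a toy sketch (the function name and diff samples are mine, not from the article or comment) of a naive lines-of-code metric that counts added, non-blank lines in a unified diff. The same one-line change scores four times higher just by being padded across extra lines:

```python
def loc_added(diff_lines):
    """Naive productivity metric: count added, non-blank lines in a unified diff."""
    return sum(
        1
        for line in diff_lines
        if line.startswith("+")
        and not line.startswith("+++")  # skip the file-header line
        and line[1:].strip()            # skip blank additions
    )

# The same one-line change, written compactly vs. padded out for the metric:
compact = ["+total = price * quantity"]
padded = [
    "+total = (",
    "+    price",
    "+    * quantity",
    "+)",
]

print(loc_added(compact))  # 1
print(loc_added(padded))   # 4
```

Both diffs do identical work, yet the padded version looks four times as "productive" to the counter, which is exactly why raw LOC (and similarly, check-in counts or bugs-closed tallies) invite gaming rather than measure value.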
