posted by Yamin on Wed 9th Sep 2009 16:17 UTC
I've been developing software for quite a few years. One of the issues that comes up again and again in my work is this concept of design and implementation. I recall it being a significant part of my education in the University of Waterloo's Computer Engineering program as well. The message was always the same: never write code first. First you must design the software by writing a design document, flow charts, pseudo-code, timing charts... then it is merely a trivial matter of implementing it. Note the attitude towards implementation here. The real work is in the design; implementing it is just a trivial matter. It sounds so simple, doesn't it? Now, how often does this work out in real life?

Before I begin my analysis (read: attack) on this attitude, let me first say that I am probably the last person who would just start hacking away at code. I really do believe in design and in properly thinking about a problem first. So keep this in mind for the rest of this article: I am in no way advocating hacking as a legitimate way to write software. Let me also suggest that you read this article with a focus on the mentality of the people in the business and how it works out in practice. Many of the terms I take issue with may seem reasonable from an abstract academic point of view, but this article is written from a practical perspective.


The Origin

I believe this attitude stems from older fields of engineering. The civil engineer designs a bridge and the construction workers build (implement) the bridge. The mechanical engineers design a car and the autoworkers assemble (implement) the car. So it was natural to try to overlay this well-known idea onto the field of software: one person designs the software and another implements it. You can see the constant theme here. The implementers are thought to be like robots. All they need to do is follow the instructions in the design and you will end up with a good product.


The Philosophical Problem

The major problem with this is that ALL of software is design. 100% of software is design, from the high-level, architect-like design down to the low-level design of a for-loop. The implementers of software are not human! I know you suspected as much, given how odd many programmers are. No, the implementers of software are actually 'perfect' machines. They are the compilers (interpreters, preprocessors... are all included in the generic use of this word). For almost all purposes, the compiler is perfect. I have yet to run into a situation where I've written code, the compiler has not followed my instructions, and that is the reason something broke. It hasn't happened yet.

It is rather strange actually. It is as if people do not recognize the very thing the computer brings to the table. It gets rid of human implementers. It makes them obsolete. Thus, it can do the same task perfectly over and over again without error. Software is like a civil engineer having an army of robots capable of reading his design and perfectly building the bridge every time. What an amazing world that would be. Every screw is tightened to the exact specification. Every weld done perfectly. Every piece of steel cut to the exact precision. That is what we have in the world of software. The perfection of implementation. Yet, we do not recognize this. Rather, we have decided we cannot live in a world without human implementers. So we erroneously transferred this concept over to the field of software and created the notion of implementation where none is needed.


What is Design?

Yes, all of software is design. There is no implementation. Pardon me as I stress this over and over: there is only high-level and low-level design. To mirror other fields of engineering: a civil engineer also does high-level design, such as choosing the type and shape of the bridge, and lots of low-level design, such as choosing the kinds of screws, where they go, where to weld...

All parts of the design are essential and are 100% design. So it is in software. The high-level architecture (choosing components, designing interfaces...) is essential. So is the low-level design of individual for loops, error checking... I dare suggest that most of the problems I deal with on a day-to-day basis are problems in the low-level design. Low-level software should not be dismissed as dummy work. It is the guts of a program.


The Real World Problem

Every line of source code is design. Source code is the equivalent of the blueprints for a bridge. The only complete design is the actual source code. Once you realize this, you will begin to understand why so many software projects go wrong. It is not enough to hand a design document or specification to a code-monkey and expect everything to come out okay. The key issue is to understand that no traditional design document or specification is complete. If it were complete, you would have been better off just writing the source code yourself.

After all, what is source code but a specification of what the program should do? Most modern programming languages resemble written English closely enough that a well-written program reads as well as a specification. Modern programming languages are not cryptic or needlessly verbose like assembler. They have gotten so good, and have shed so much of the fluff, that they have essentially become a very good way to represent algorithms. As I look through my source code today, about the only fluff left is the import or include statements at the top of the file. Well-written libraries and proper design abstract away the rest.
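To make that claim concrete, here is a small illustrative sketch of my own (the function and its name are hypothetical, not from any particular codebase): a routine whose code is barely longer, and no less readable, than the English sentence that would specify it ("clamp a value to the range [low, high]").

```java
public class Clamp {
    // "If the value is below the range, return the low bound; if above,
    // return the high bound; otherwise return the value unchanged."
    static int clamp(int value, int low, int high) {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    public static void main(String[] args) {
        System.out.println(clamp(15, 0, 10)); // prints 10
        System.out.println(clamp(-3, 0, 10)); // prints 0
        System.out.println(clamp(7, 0, 10));  // prints 7
    }
}
```

Read the comment, then read the code: they say the same thing, and only one of them can be compiled and tested.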

Suppose I were to write a function that divides two numbers. I could either write a specification as follows:

Program Inputs: 32 bit signed integer A, 32 bit signed integer B
Program Outputs: A/B as an integer ignoring any fractional component
Error checking: If B is 0 then the program will throw an exception

That is what a properly written specification would be. I could of course have just written the source code myself as follows.

int DivideNumbers(int a, int b)
{
    // Guard against the undefined case before dividing.
    if (b == 0) throw new ArithmeticException("Illegal divide by zero");
    return a / b;
}

Is the English specification any clearer than the actual code? I highly doubt it. You might as well just look at the source. Does the English specification provide anything of value that the source code does not? On top of that, if you need to make changes and you are relying on the specification, you run the risk of the specification going out of date with respect to the source. Now imagine a program with thousands upon thousands of functions. Have you ever worked at a place where every function is defined in a specification to the detail above BEFORE any code is written? Of course not! Everyone recognizes what insanity that would be. It would be mindlessly repetitive to fully specify something in English and then translate it into source code.
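One practical way to keep a specification from drifting (my sketch, not something this article is proposing as doctrine) is to restate it as executable checks that live next to the code. The class and method names below are hypothetical; the behaviour is the three-line division spec from above. If the code ever stops matching the spec, the check fails loudly, whereas an English document would just silently go stale.

```java
public class DivideSpec {
    // The spec, as code: A/B as an integer, ignoring any fractional
    // component; if B is 0, throw an exception.
    static int divideNumbers(int a, int b) {
        if (b == 0) throw new ArithmeticException("Illegal divide by zero");
        return a / b; // Java integer division truncates toward zero
    }

    public static void main(String[] args) {
        // "A/B as an integer ignoring any fractional component"
        System.out.println(divideNumbers(7, 2)); // prints 3
        // "If B is 0 then the program will throw an exception"
        try {
            divideNumbers(1, 0);
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage()); // prints the error text
        }
    }
}
```

Each line of the English specification maps onto exactly one line of code, which is the article's point: at this level of detail the "specification" and the "implementation" are the same artifact.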

Of course, this even assumes it is reasonable to think a design can be perfected in a single pass. In reality, software is an iterative process, as is most design. You design something, test it out, make changes, rinse and repeat. Software makes this process amazingly simple given our debuggers, which act as simulators. Certain fields of engineering have repeated the same designs so often that those standard designs make it seem as if the fields somehow possess more inherent quality. We certainly aren't building many new styles of bridges, for example. Yet given any new problem, all fields of engineering face an iterative process filled with bugs. We've been doing civil engineering for centuries, and somehow things like the Big Dig in Boston still cause lots of problems.

Table of contents
  1. The Problem with Design and Implementation, 1/3
  2. The Problem with Design and Implementation, 2/3
  3. The Problem with Design and Implementation, 3/3