Most software is rigid in nature, making it difficult to reconfigure and modify without costly upgrades. Can software be made more plastic or malleable? Stephen Morris demonstrates how aspect-oriented programming provides an important tool in the race to achieve plastic software. If IBM’s on-demand computing spreads across the industry, this requirement will become the rule rather than the exception. Will you be ready?
Doesn’t look very promising; I would imagine it would be almost as unreadable as self-modifying code. I don’t particularly like OOP or classes, either.
-bytecoder
Well, I think it’s hard for most people to take your opinion seriously, since you don’t like OOP. Really, it is better than procedural programming in almost all cases. Even the Linux kernel is OOP (OOP in C is ugly but possible… but then again, what isn’t ugly in C?).
I’ve read about AOP before and I’m interested in it, but I’m not sure of its real-world benefit. I think it would really need a good IDE that would indicate where code will be inserted, or else you could spend hours hunting bugs that come from code in other places.
Bite me! I’m not an OOP fan either. I’d make use of namespaces, small and well-defined functions, and functional programming paradigms before I drink the OOP Kool-Aid.
For each problem domain I am trying to study, I may use a combination of paradigms. Heck, I may even make up my own. Trying to map every problem domain to OOP only results in the bloated, over-engineered, spaghetti-looking code that our .NET and Java brethren enjoy deciphering.
That’s why I have an aversion to programming languages that try to shove their favorite paradigms down my throat. It’s like forcing everyone to write one flavor of legalese English.
Yay for C
Why aren’t we blunt today? You don’t seem to have considered that I might actually know what I’m talking about.
Really, it is in almost all cases better than procedural programming. Even the Linux kernel is OOP
Like when? I suppose the linux kernel could be considered OOP, but then again, almost anything can be. The actual definition of OOP is so absolutely loose as to be useless, which is why I define “OOP” to be the C++/Java style (and almost any other “true” OOP language).
In any event, I don’t have time to list all the reasons why OOP isn’t practical, but, if you give an example, I’ll happily show you why OOP isn’t suitable in that situation.
-bytecoder
The actual definition of OOP is so absolutely loose as to be useless, which is why I define “OOP” to be the C++/Java style (and almost any other “true” OOP language).
C++ and Java are strongly procedural. Neither can be considered a good example of an OOPL.
In fact, most of the issues people have with ‘OOP’ really are with the 3rd-classy-ness of the OO parts of C++ or even Java. Eiffel or Smalltalk may have their own problems, but they’re different ones.
Well, they all follow the same basic rules, which is what I’m referring to. I believe I know what you’re talking about (e.g. OOP like in ruby), but that doesn’t fix the core problems I see in OOP.
-bytecoder
“The actual definition of OOP is so absolutely loose as to be useless, which is why I define “OOP” to be the C++/Java style (and almost any other “true” OOP language). ”
Well, Allen Holub, in “Holub on Patterns” (pages 12 through 19), gives a rather tight definition of what an “Object” is.
You might want to read the book.
Well, the word “object” can be interpreted in many ways. The possible meanings are so numerous that you might as well not use it at all; e.g., a string literal in a C program could be considered an “object,” which means that almost every program on the planet is “object oriented.”
-bytecoder
“Well, the word “object” can be interpreted in many ways.”
Read the book and stop trying to be a pedant. The definition is on page 13.
Well, the problem with that is I don’t have the book, so unless it’s available online for free, I can’t read it. In any event, I was merely giving my reason for defining what I mean by “OOP,” so that there is no confusion (it’s happened before).
-bytecoder
Well, the word “object” can be interpreted in many ways.
just interpret it the OOP way
e.g. a string literal in a C program could be considered an “object,” which means that almost every program on the planet is “object oriented.”
oh, really?
Just turned another story into an OOP flamewar. Just stop it.
Flaming – the act of posting messages that are deliberately hostile and insulting, usually in the social context of a discussion board (usually on the Internet)
– Wikipedia
Really? Well, unless you’re insulting me, I don’t know how I could have started one, since I was merely responding with my opinion on the subject (and not you, whoever you are).
-bytecoder
Maybe he was talking about a functional language instead… Like LISP. Lisp is very good, and you can expand it to be the way you want.
…for some time now. Personally, I do not see it as a revolution, as many don’t. There have been a few articles covering it on the net. The main disadvantage is that you cannot literally read “through” the code anymore: in fact, AOP is the paroxysm of “spaghetti code”: code can be branched into (at compile time) everywhere…
— bouh
That’s true, but it could easily be solved with an editor that marks the places where code will be inserted.
I don’t like it because it basically acts as a workaround for having to properly design the code in the first place.
-bytecoder
I see people praising this every few months, but I still don’t get it.
I’m a programmer and I usually pick up programming-related stuff quickly, but I just don’t understand the purpose of this. I’ve tried to read tutorials, but they were all… weird.
OK, I’ll give you a few-word example.
OOP is usually seen as a top-to-bottom analysis: meaning that you wrap objects, you organize concepts into hierarchies, you inherit… etc. Well, you know, I guess.
Now try to answer this simple problem with OOP: in my program, I want to trace calls (and parameter values) of many critical methods throughout my program. (Try to spend a few seconds thinking about how you would solve the problem with OOP, or with just plain functions.)
With AOP you create an aspect which “joins” before your function’s execution. At compile time, the code of the aspect is woven in so that it executes before your function. The compiler does the work for you, and your code is (supposedly) cleaner.
AOP is transversal (or left-to-right) compared to OOP, and it combines well with OOP. I took the logging example because it is one of the immediate applications I saw for AOP. However, as said before, there is a drawback: you do not see where your code branches, and reading the code becomes more difficult.
Someone suggested using an editor that highlights the branching. Is there such an editor? Has anybody tried it?
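[Editor’s note: the tracing idea above can be sketched without an AOP compiler. The following is a rough Python emulation, with all names hypothetical and chosen just for illustration: the `before` decorator plays the role of a “before” join point, though a real weaver such as AspectJ would inject the advice at compile time rather than at function-definition time.]

```python
import functools

trace_log = []  # collected trace records

def before(advice):
    """Return a decorator that runs `advice` before the wrapped function,
    roughly like an AOP 'before' join point (applied at definition time
    here, rather than woven in by a compiler as in AspectJ)."""
    def weave(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            advice(func.__name__, args, kwargs)
            return func(*args, **kwargs)
        return wrapper
    return weave

def log_call(name, args, kwargs):
    # The "advice": record the call and its parameter values.
    trace_log.append((name, args, kwargs))

@before(log_call)
def transfer(account, amount):
    return amount  # the business logic stays free of tracing code

transfer("savings", 100)
# trace_log now holds ("transfer", ("savings", 100), {})
```

The point of the example is that `transfer` itself contains no tracing code, and the `log_call` advice can be changed or removed in one place.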
OK, I’ll give you a few-word example.
OOP is usually seen as a top-to-bottom analysis: meaning that you wrap objects, you organize concepts into hierarchies, you inherit… etc. Well, you know, I guess.
Now try to answer this simple problem with OOP: in my program, I want to trace calls (and parameter values) of many critical methods throughout my program. (Try to spend a few seconds thinking about how you would solve the problem with OOP, or with just plain functions.)
With AOP you create an aspect which “joins” before your function’s execution. At compile time, the code of the aspect is woven in so that it executes before your function. The compiler does the work for you, and your code is (supposedly) cleaner.
AOP is transversal (or left-to-right) compared to OOP, and it combines well with OOP. I took the logging example because it is one of the immediate applications I saw for AOP. However, as said before, there is a drawback: you do not see where your code branches, and reading the code becomes more difficult.
Ever heard of a debugger? In any event, if code might need to be inserted later on, the original should be designed around that in the first place. If it wasn’t, you’re going to pay the price for not doing it; instead of dealing it out all at once, AOP simply spreads it out over a long period, tenfold.
Someone suggested using an editor that highlights the branching. Is there such an editor? Has anybody tried it?
I’ve never heard of any. I was merely pointing out that the spaghetti-code problem is circumstantial. Even with a special editor, though, it would still be harder to read and maintain.
-bytecoder
OK, I’ll try to reply to your reply to my post. Pardon me if I am a bit off the mark, because I am not a good English speaker, and I must admit I had a hard time understanding the point of your comment:
“Ever heard of a debugger? In any event, if code might need to be inserted later on, the original should be designed around that in the first place. If it wasn’t, you’re going to pay the price for not doing it; instead of dealing it out all at once, AOP simply spreads it out over a long period, tenfold.”
Well, I have trouble understanding whether you are speaking of OOP code or AOP code when you say, “In any event, if code might need to be inserted later on, the original should be designed around that in the first place.”
So I assume your point is: in any case, if you do not do a good design, you will pay the price. And anyway, to trace, you can use a debugger.
The problem:
OK, I will extend the problem a bit. You still need to trace some class. Your application must run fast, as it is time-critical, and it must be up 24 hours a day; some kind of server app. One day it crashed, without a core file (and anyway it’s not compiled with debug info). It was very difficult to know why, and that is what motivated you to add the tracing.
What people do:
In functional/procedural programming, you usually include the tracing code at the beginning and the end of the functions, which is tedious and difficult to maintain (you must search for every occurrence of the function…).
In OOP you can do as above. Or, if you are very good with design patterns, you decide to use a Functor Factory, whose only purpose is to be inherited from and to call your function. In the constructor and the destructor it performs the tracing. This is an elegant solution, but since building a functor is time-consuming, you might want to use a pool… tedious. OK, let’s just include the code where we need it…
AOP gives an elegant way to solve this problem, and without the OOP overhead.
I read your posts and I saw that you do not particularly like OOP either. As someone said, I see these as a means to an end. I think AOP can be useful, but its usefulness must, in my opinion, be limited to a very few particular design issues, because it has some drawbacks. I think this case is one that reflects proper AOP usage.
— bouh
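[Editor’s note: bouh’s comparison can be made concrete with a sketch in Python; the class and function names are hypothetical. The `traced` wrapper is written once, and `trace_all_methods` acts as a crude “pointcut” that applies it to every public method of a class, instead of pasting tracing code at the beginning and end of each function. A real AOP weaver would do this at compile time; this is only a runtime emulation.]

```python
import functools

log = []

def traced(func):
    """Record entry and exit, like tracing at the beginning and end
    of a function, but written once instead of pasted everywhere."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append(("enter", func.__name__))
        try:
            return func(*args, **kwargs)
        finally:
            log.append(("exit", func.__name__))
    return wrapper

def trace_all_methods(cls):
    """A crude 'pointcut': wrap every public method of a class."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            setattr(cls, name, traced(attr))
    return cls

@trace_all_methods
class Server:  # hypothetical stand-in for the time-critical server
    def handle(self, request):
        return request.upper()

Server().handle("ping")
# log now holds ("enter", "handle") followed by ("exit", "handle")
```

Adding tracing to another class is one decorator line, and removing it touches no method bodies, which is the maintenance point bouh is making.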
Well, I have trouble understanding whether you are speaking of OOP code or AOP code when you say, “In any event, if code might need to be inserted later on, the original should be designed around that in the first place.”
I was talking about AOP.
The problem:
OK, I will extend the problem a bit. You still need to trace some class. Your application must run fast, as it is time-critical, and it must be up 24 hours a day; some kind of server app. One day it crashed, without a core file (and anyway it’s not compiled with debug info). It was very difficult to know why, and that is what motivated you to add the tracing.
What people do:
In functional/procedural programming, you usually include the tracing code at the beginning and the end of the functions, which is tedious and difficult to maintain (you must search for every occurrence of the function…).
In OOP you can do as above. Or, if you are very good with design patterns, you decide to use a Functor Factory, whose only purpose is to be inherited from and to call your function. In the constructor and the destructor it performs the tracing. This is an elegant solution, but since building a functor is time-consuming, you might want to use a pool… tedious. OK, let’s just include the code where we need it…
Well, to do all that in the first place you have to recompile, so you might as well just compile a debug build and use a debugger.
-bytecoder
From my experience, debugging really slows down an application. This may not be important for GUI-related code, but for a server daemon (for example) it is essential to be very responsive.
The crash can occur at any time over a month, and in that case the only useful information you have is the log and the core. If you debug, you get a snapshot of what just happened; if you trace some critical functions, you get a history, which I have found to be more useful.
If you want to debug, OK, the discussion is closed. If you are considering logging various functions here and there, then AOP is, for this case, a good tool, in my opinion.
Got any decent code examples? What’s provided in the article didn’t really help.
Consider the case where something isn’t working properly. You want to inspect its state a bit more closely. You could put lots of print statements into it, exposing the values of variables as you go. Or you could attach a class which does the same thing using AOP, but you wouldn’t need to recompile your original class, or dirty it with loads of print statements (and risk heisenbugs).
My example also shows that AOP is really nothing new. It takes one of the principles of debugging and rebrands it as a more generic approach to extending code. I like to call it ‘wallpaper-oriented programming’.
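[Editor’s note: the “attach without recompiling” idea can be sketched like this in Python; the `Parser` class is a hypothetical stand-in for the misbehaving code. Its source is never edited, so there are no print statements to clean up afterwards, and the inspection can be detached when no longer needed.]

```python
snapshots = []

class Parser:  # hypothetical class that misbehaves
    def parse(self, text):
        self.tokens = text.split()
        return len(self.tokens)

# Attach inspection from the outside: the class definition above
# stays untouched, much like advice attached via AOP.
original_parse = Parser.parse

def inspected_parse(self, text):
    result = original_parse(self, text)
    snapshots.append((text, list(self.tokens)))  # capture internal state
    return result

Parser.parse = inspected_parse

Parser().parse("a b c")
# snapshots now holds ("a b c", ["a", "b", "c"])

Parser.parse = original_parse  # detach the inspection when done
```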
AKA “I haven’t mastered it, so it can’t be any good”.
yawn.
I probably know more about OOP than you do, mr. anonymous.
-bytecoder
People love to point out, “But OOP stuff is so much spaghetti code!” Well, you know what? Take *ANY* language or method and you can create spaghetti code with it! What you have run into is not “all OOP is spaghetti” but “these people’s designs are spaghetti,” and you then go on to generalize that OOP = spaghetti, which is a bunch of BS. A well-designed system is a well-designed system, regardless of implementation language and its native attributes. Spaghetti systems are the same way!
So, there are those that don’t know what they’re doing, and OOP doesn’t help them when they don’t know what they’re doing, but they still won’t really know what they’re doing, even with procedural or aspect oriented methods. There are a lot of patterns that are not language-specific that are non-spaghetti, and there are patterns that are spaghetti-breeders. One pattern that comes to mind as a nightmare for growing things over a period of time is the Visitor pattern, because it violates acyclic relationships in a system. I’m sure there are other patterns associated with OOP that are equally stupid for sanity, too, but such things can be done in any suitably powerful language if you really try.
We where talking about AOP, not OOP… Perhaps you should read what people say before refuting it
-bytecoder
s/where/were/. I don’t know what’s gotten in to me, that’s the second grammar/spelling mistake of mine in this thread.
-bytecoder
I don’t think the point is about being able to write spaghetti code in any language as much as how easy it is to end up with code that looks like spaghetti when working in some language.
In case of Java and C++ it’s quite easy (in particular in real world situations where the customer gives you a spec, then just as you finish implementing everything according to the spec, the customer changes the spec and gives you a tight deadline – mix into that a group of other developers who may or may not have similar views on programming as you do and you end up with spaghetti code). It’s also quite easy with C.
It’s a bit harder to have spaghetti code in languages such as Lisp, Scheme, Smalltalk and Self and harder yet in SML, Caml (as in a subset of OCaml without the OO extensions) and Haskell.
I have to say I don’t really “get” AOP either. The few examples I’ve seen all seem to focus on logging which seems to be the perfect AOP example. How about other uses? AOP just seems like a fancy new name for a pasta maker to me.
And since Aspect-Oriented programming is a patented technique [http://www.pmg.lcs.mit.edu/%7Echandra/publications/aop.html], basically nobody can legally use it unless you’re a personal friend of the inventor.
So, who really cares if it’s theoretically any good, when legally it is worthless?
[Minna Kirai]
I find it interesting how aspect-oriented programming strongly resembles Objective-C-style categories (references: http://www.toodarkpark.org/computers/objc/moreobjc.html#756 and http://en.wikipedia.org/wiki/Objective-C#Categories ). Based on those references, you can see that all the mechanisms required for aspect-oriented programming are in place.
– Dennis C.
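[Editor’s note: for readers unfamiliar with Objective-C, a category lets you add methods to an existing class without subclassing it or editing its source. A rough Python analogue, with a hypothetical `Account` class and `formatted` method, looks like this:]

```python
class Account:  # hypothetical existing class, imagine it comes from a library
    def __init__(self, balance):
        self.balance = balance

# Category-style extension: attach a new method to the existing
# class without subclassing it or touching its source.
def formatted(self):
    return f"${self.balance:.2f}"

Account.formatted = formatted

Account(3.5).formatted()
# → "$3.50"
```

Note that this adds new behavior to a class; whether that is enough to count as aspect-oriented programming is exactly what the replies below debate.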
Well, I do not know Objective-C; I must admit I have never used it, so I was very curious to read the Wikipedia page.
Honestly, Dennis, I do not see which mechanism you are talking about that would replace AOP. Categories do not seem to be aspects at all; they are something completely different.
— bouh
OK, well, first of all, none of you really gets it right when it comes to programming paradigms.
NONE is a goal in itself. This is the problem with Java, for example: it forces OOP even where a simple one-liner procedure would be enough. On the other hand, emulating OOP with some ugly typedefs/records and global procedures is hacking too.
The best one can do is understand all these things as A MEANS TO AN END, not as an END IN ITSELF.
I myself know and use only procedural and OOP programming, and I mix them (and they mix very well). AOP seems a bit strange to me because I don’t really understand it, but unlike some Middle-Ages idiot, I’m not going to flame it out of fear of the unknown.
I’ve basically never used functional programming either, but I KNOW firsthand that functional programming can be very beneficial, especially in code cleanness and speed, for certain problems (see the shootout).
To those who think Java and C++ are “true OOP languages,” I’d suggest reading “Object-Oriented Software Construction” by Bertrand Meyer. The book has chapters on “OOP in non-OOP languages” as well, for those who wondered about how C can be used in an OOP context.
The only rule that really makes sense when it comes to choosing languages and the like is: pick what you (or the team, in most cases, but then there’s also management, so that’s another story) feel comfortable with, and keep in mind that the code will change over time. Oh, and of course: “Less whining, more coding.”
Both have their place. Neither is overall better than the other.
Personally, I find myself leaning toward OOP programming these days, but I always keep my code as simple as possible. I also minimize my use of inheritance to ‘only when needed.’ It seems like many people create object trees for fun. Also, I think Java programmers in general are overly interface-happy.
Both have their place. Neither is overall better than the other.
That’s your opinion, and I respect that; however, I do believe you are wrong. Whenever a controversial argument occurs, there are always people who say “neither is better,” but that can’t even be determined! Neither OOP nor procedural programming can be claimed to be better unless some fair studies are performed.
I don’t like OOP because I don’t find it particularly elegant or powerful, and because of how incredibly over-hyped it is. To me, OOP is a monolithic kernel and procedural/functional are micro.
-bytecoder
As Fred Brooks said in “The Mythical Man-Month,” too much abstraction is a bad thing. AOP is too much abstraction over time.