My physics teacher likes to say that physicists try to make the problems they face look like ones they already know how to solve. Simple harmonic oscillation was an example he used frequently in class, and presumably the same holds in physics in general. Anyhow, the reason they do this is simply so that you can solve a problem in time to meet a homework deadline, win a Nobel Prize, or whatever.
In computer programming, I think, there is a similar situation. A newbie, learning programming for the first time, quickly realizes that he can and often must use common patterns when writing code. For example, when dealing with a set of elements, handling one item at a time in a loop is a very common technique. And if one programs in a functional style rather than a procedural one, recursively calling a function with the remaining items and the accumulated results is still almost the only way to go. You are hardly ever surprised: everyone solves a problem in the way you are so familiar and comfortable with. Extreme programming works nicely because most of the time, what you have to do is code and test things very very carefully rather than solve problems you don’t know how to solve. You have to create a text editor? Then you immediately expect to write a program with a big data buffer that is modified as the user wishes and GUI components displaying the contents of that buffer. MVC is but a natural consequence.
The situation changes when you start programming with OOP, reusable components, concurrency, Haskell, or anything else that looks unusual. Code becomes a manifesto of a world view, a philosophy about computers. The creators of NeXT thought it was insanely great, but not many people, not enough for it to be commercially viable, shared the view. Smalltalk was meant to be an intelligent tool for all of us humans, but it simply did not solve the problem we built computers for: number crunching. Then we ask why. The following are reasons I suspect might be the case.
* No one knows what OOP is; it’s not like math, which we have all struggled with for years, nor is it the von Neumann machine that every computer user is familiar with (we cannot do real programming on the Turing machine). The claim that OOP models the real world is rubbish; diamond inheritance, subclassing, the principle of subtyping, the dangers of downcasting, and so on cannot be found in everyday life.
* Concurrency is a fantasy; after all, computers do one thing at a time. Trying to make things look like they happen at the same time only leads programming into the hell of scheduling operations, resulting in a profound fear that someday the program might halt all of a sudden due to deadlock.
* Reusability, once seen as the key to solving the never-materialized software crisis, is a fallacy of false analogy. Making software is not quite the same as manufacturing automobiles. A software component can be copied without any cost. The scenarios that the term reusability promises will not happen. We need a different jargon.
* Haskell, despite the affection of computer scientists, is a miserable failure. Lisp and its derivatives such as Scheme are the ones that are widely used and taught in school as general functional programming languages. Haskell may improve productivity dramatically, but if non-geniuses cannot use it, it will remain a toy rather than a tool for real programmers.
* Prolog? AI? Genetic algorithms? Voice recognition? Agents? …. Who has any clue about them?
In other words, this is the core of the reasons why script programming, C++-style OOP (as opposed to Objective-C-style or Self-like style), and UNIX-style I/O operations (as opposed to Java-style) are so popular. As its creator famously noted, Perl was designed to solve real problems for real programmers. C is still a favorite of many real programmers. Computers cannot be more than computers. And so we must try to solve problems in the ways we know, or the ways we are used to, because the architecture of computers is still the same today as it was decades ago.
The article tried hard, but it cratered.
And just for the record, a recursive function is really, really bad, especially on an unbounded list (unbounded = who the heck knows how big it is at runtime). That’s a good way to whack your stack and core dump/crash.
It’s this kind of thing that’s the problem with the article: there’s no depth on the other side. Don’t use goto? Well there are perfectly good reasons to use it…but if you’ve never encountered them then in your worldview goto is bad. OOP? It’s good for some things, not for others. And in any case it’s useful just to organize all that C code (wrap it). More importantly, it helps in standardizing interfaces and access to your data structures, something important in multi-programmer environments. A big buffer for text? How naive. What happens when you try and open that 85gb file? Barf.
The problem with computing is that everything is still relatively new, and it’s still too macho. Visual Basic has its faults, but it’s not much more than a bunch of components strung together – the holy grail of computing. It’s also denigrated by most ‘real’ programmers. A good VB guy can write something that performs to spec in a day, and a C/C++ guy would take weeks and it still wouldn’t match the requirements. It’d be faster and smaller, though. What can you do?
As for the other stuff, well, if you don’t understand the problem space, then that’s your issue. Prolog, amazingly enough, has real-world uses that would fry your mind if you tried to do it using normal procedural languages. Whatever.
The main problem with programming is the people that write the software, not the box, the architecture we use, the endian-ness, etc. It’s starting to change now that everything is high-speed, but it’ll be another few years before everyone goes high-level (high-level meaning scripting/VB style).
This article started out interesting and then in the last paragraph or two went into the weeds.
He first talks about how we should break problems down into their simplest components (e.g. his example of harmonics) to solve a problem. Then he states that OOP is impractical because it “cannot be found in everyday life”. Isn’t part of OOP to break a problem down so that it becomes simpler and more modular, so that you can use those pieces to solve other “real world” problems as well?
His comments about concurrency are a little weak, since concurrency has always been an issue and has always been addressed. Read more about semaphores, context switches, mutexes, etc. for more information on this topic.
And the comment I love the most is in the summary: “In other words, this is the core of the reasons why script programming … are so popular.” He alludes to the idea that the reason scripting languages are so prolific is that languages like C/C++, Java and LISP cannot solve real-world (application instead of theory) types of problems. I, on the other hand, find the answer to be much, much simpler: it’s because those languages are _easy_ to learn and any Bill or Jane can learn them. But in reality, there is a reason why eBay uses Java to run its site and why Microsoft is rewriting large portions of the Windows OS code base in managed .NET code – it’s because languages like Java/C# and C/C++ are designed to solve those types of problems; they are not designed for newbies who want to echo “hello world” to the screen.
If I read this correctly, the author is (perhaps) pointing out the obvious: what goes on in the majority of software houses that crank out copious amounts of code, and what happens in some of the more exciting branches of CS research, are vastly different.
I have no idea if the average software monkey has any clue about “Prolog? AI? Genetic algorithms? Voice recognition? Agents?”. They may not, but a lot of us in the research community not only have a clue about this stuff but are using and developing it so it might one day become ubiquitous — usable by the average programmer. It takes time for a new technology to mature enough that it can be used by those who don’t necessarily understand it.
Once upon a time a compiler might have been considered something that no one had a clue about, forcing everyone to code in assembler. However, these days it’s a fundamental tool for even the most unskilled software developer.
I hope that the standard of commentary at osnews has not been lowered to that of the (apparently) uninformed and/or uneducated. The rest of us deserve more credit than that.
I’m not sure I understood the point the writer of this article was trying to make. Perhaps his English wasn’t quite up to snuff to get the point across properly.
I guess the point I have is that I don’t believe any programming style/environment/methodology/etc. has “failed” because if some people can use it to get work done properly, then it works. Some people like good ol’ procedural C. If they can write a good app in that language, more power to them. Some people swear by Objective-C and Cocoa programming on OS X. If they can write a good app in that environment, more power to them. Some people are so-called “script kiddies” hacking away at PHP. If they can write the Internet’s next killer app from their upstairs bedroom, more power to them.
I say find the tools that suit you, learn how to use them effectively, and go for it. Who are we to sneer at anybody?
Jared
it’s all about business…
innovation my foot
A C programmer is at a great disadvantage when learning C++ and OO. The programmer must do a lot of unlearning before he or she can become an effective, competent C++ programmer (start with Scott Meyers’ Effective C++).
This perhaps is a major source of criticisms of OO and other modern software development methodologies.
OOP is all about abstraction, like physics and math are. The real world does not have physics or math, but as long as you keep in mind that you are abstracting something as an object, these objects really stand for real things.
“The creators of NeXT thought it was insanely great, but not many people, not enough for it to be commercially viable, shared the view. Smalltalk was meant to be an intelligent tool for all of us humans, but it simply did not solve the problem we built computers for: number crunching. Then we ask why. The following are reasons I suspect might be the case.”
First of all, NeXT engineers _did_ succeed. Their creation lives on, very successfully and commercially viably, as the NS frameworks in Cocoa. Second, since when was computing about number crunching? What is a computer? There are only numbers in a computer because _we_ assign them to symbols and, lower down, voltages. The Turing Machine? That works on _symbols_, not numbers. Ditto for lambda calculi–there’s a reason why Church numerals exist: they show how totally irrelevant numeric constants are to computation. Your premise of falling back on numerical computation therefore makes no sense.
“No one knows what OOP is”? Not quite. Perhaps it’s hard to pin down an exact definition, but it’s a far fetch to say _no_ one knows it. OOP modeling the real world is a fallacy that you might’ve picked up, but there are others who didn’t. Smalltalkers are among the best examples of this–using objects to simulate real-world objects is only half the story. The other half is that they can be used to represent concepts and modules of functionality. Observe that it’s quite hard to map MVC to real-world objects. The point is, this limitation of OOP that you imply is a self-imposed limitation–you make what you want of objects, no more, no less.
“Concurrency is a fantasy.” It’s hard, but it’s sure as hell not a fantasy. There’s a diverse body of research in this field, and I’m willing to bet your exposure to concurrency starts and ends at Java or C++ threads. Try Concurrent ML to see another take on this.
“Haskell, despite the affection of computer scientists, is a miserable failure. Lisp and its derivatives such as Scheme are the ones that are widely used and taught in school as general functional programming languages. Haskell may improve productivity dramatically, but if non-geniuses cannot use it, it will remain a toy rather than a tool for real programmers.”
Haskell and Lisp are far from failures. The real failure lies with people who refuse to learn anything past syntactical differences and knock languages down for being different than what they’re used to. Please, keep an open mind. A decade ago, garbage collection was often scoffed at, but now it’s a feature almost taken for granted. See that the same may occur to many of the items you listed as failures or things not worth caring about.
“Computers cannot be more than computers. And so we must try to solve problems in the ways we know, or the ways we are used to, because the architecture of computers is still the same today as it was decades ago.” And what is this “way we know”? Crunching numbers?
I’m sorry, but what is the point of your article? You asked why we have recurring problems in programming and briefly mention “crunching numbers”, then you proceed to knock down a bunch of technologies–which proves nothing for your thesis–finally you wave your hands a bit and say we should return to old time-tested practices without saying what they are? If you’ll permit me to say so, you’re obviously unhappy with the programming world, but languishing in our own ignorance and publishing it is not a solution.
“recursive function is really, really bad, especially on an unbounded list”
That is, unless you use a system which is properly tail-recursive, like Scheme and many implementations of Common LISP. Instead of pushing a new stack frame for every function call, they reuse existing space for tail-calls (that is, calls whose return values determine the return value of the encapsulating function). This allows typical recursive constructs to execute in finite space – just like iteration, except that recursion feels more elegant to many, and it’s easier to make correctness proofs about it.
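For the curious, here is roughly what that difference looks like, sketched in C (the function names are invented for the example, and unlike Scheme the C standard does not guarantee the optimization – gcc and clang merely tend to apply it at -O2):

#include <stdio.h>

/* Naive recursion: the addition happens AFTER the recursive call
   returns, so every call keeps its own stack frame and a large n
   can blow the stack. */
long long sum_to(long long n) {
    if (n == 0)
        return 0;
    return n + sum_to(n - 1);           /* not a tail call */
}

/* Tail-recursive version: the recursive call is the last thing the
   function does, so its frame can be reused. Scheme mandates this;
   C compilers merely tend to do it when optimizing. */
long long sum_to_acc(long long n, long long acc) {
    if (n == 0)
        return acc;
    return sum_to_acc(n - 1, acc + n);  /* tail call */
}

int main(void) {
    printf("%lld\n", sum_to_acc(1000000, 0));
    return 0;
}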
ever wondered why the laws of nature are still the same?
When do you use GOTO?
“Concurrency is a fantasy; after all, computers do one thing at a time.”
The author is putting machines with one processor and machines with two processors in the same bag! Concurrency is NOT a fantasy. Not all computers in the world have only one processor.
The article is too obvious. (rubbish rubbish rubbish)
First of all, _every_ problem can be solved either iteratively or recursively. The conversion between iterative and recursive implementations follows simple rules and can even be done automatically.
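For instance, here is a rough C sketch of the same computation written both ways (the list type and names are invented just for illustration):

#include <stddef.h>

struct node { int value; struct node *next; };

/* Recursive formulation: length(list) = 1 + length(rest of list). */
size_t length_recursive(const struct node *list) {
    if (list == NULL)
        return 0;
    return 1 + length_recursive(list->next);
}

/* The mechanical conversion to iteration: the argument that shrinks on
   each call becomes the loop variable, and the pending "+ 1"s become
   an accumulator. */
size_t length_iterative(const struct node *list) {
    size_t count = 0;
    for (const struct node *p = list; p != NULL; p = p->next)
        count++;
    return count;
}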
Second, but very important: you (and a lot of other people) have to learn first that OOP is a way to design code; it is not a matter of using a specific language. You can write perfectly designed OO code in plain C or even assembler, just as you can produce procedural code with Smalltalk. Some of these languages give better support for a specific style than others, but most of them do not enforce a style. So it’s basically up to the author which style he chooses.
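To make that concrete, here is a minimal sketch of object-style dispatch written in plain C with a hand-rolled vtable (the shape/circle names are made up for the example, not taken from any real library):

#include <stdio.h>

/* A "class" is a struct of data plus a table of function pointers;
   dynamic dispatch is just a call through that table. */
struct shape;
struct shape_ops {
    const char *name;
    double (*area)(const struct shape *self);
};
struct shape {
    const struct shape_ops *ops;   /* the vtable */
};

struct circle {
    struct shape base;             /* "inherits" by embedding the base struct */
    double radius;
};

static double circle_area(const struct shape *self) {
    const struct circle *c = (const struct circle *)self;
    return 3.14159265358979 * c->radius * c->radius;
}

static const struct shape_ops circle_ops = { "circle", circle_area };

int main(void) {
    struct circle c = { { &circle_ops }, 2.0 };
    const struct shape *s = &c.base;   /* used only through the base "interface" */
    printf("%s area = %f\n", s->ops->name, s->ops->area(s));
    return 0;
}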
And, maybe even more important: an object-oriented solution is only admirable if you want to solve an object-oriented problem. This is the most common mistake made by “young” programmers who have been trained in OOP without having a look at the basics.
Last but not least, for _sure_ computers are able to handle concurrent tasks. They have been for ages, machines with multiple processors have “always” been around. But on the other hand, it doesn’t matter at all. If used correctly, concurrency can ease your design instead of making it more complex. You just have to gain some experience and learn how to handle your tool.
Sorry to sound harsh, but I find this a really bad article. I thought it might say something like: programming is still in the state it was decades ago, because people keep reinventing the wheel instead of improving on others’ work. Instead it gives us statements that I just cannot agree with:
“No one knows what OOP is”
OOP makes a very good fit with the real world in many cases. Not nearly all cases, however, which is why Lisp programmers (Lisp being a language that can comfortably express just about any paradigm in use today) used OOP for about 20% of their code – a significant portion, but far from all of it.
“Concurrency is a fantasy”
You ever hear of multi-tasking, multi-CPU systems, distributed systems, P2P networks? I’d say concurrency is the norm rather than the exception these days.
“Trying to make things look like they happen at the same time only leads programming into the hell of scheduling operations, resulting in a profound fear that someday the program might halt all of a sudden due to deadlock.”
Sure, if the programmers were incompetent they may not get the concurrency issues right and come up with a system that behaves incorrectly. However, that is not concurrency’s fault; if you depend on incompetent programmers, you should be afraid even if the system is completely single-threaded.
The bit about reusability I don’t even get. Is the message that reusability doesn’t work? I guess that’s why we have so many shared libraries. I think reusability could work even better if people wouldn’t insist on making their own wheels.
I have never seen any Haskell code in my life, so I won’t make claims about it, but even if it is true that Haskell failed, so what? One language that didn’t make it does not mean others haven’t changed the world. Or perhaps Haskell was an example meant to illustrate the failure of purely functional languages. That, I would agree with. Functional programming is a beautiful paradigm, but it’s hardly a silver bullet. Some things are inherently non-functional (such as I/O). Also, functional programming does not feel very natural to many programmers (although that could just be because they were raised on imperative languages).
“Prolog? AI? Genetic algorithms? Voice recognition? Agents?”
Prolog: again, that’s just one language.
AI: I’d say we have that. It hasn’t worked the wonders that some people expected from it, but it’s there. Think games. Think office assistant. Think cameras that automatically try to get the right focus.
Voice recognition: Every aspect of it works. People can be identified by voice characteristics. My iBook takes speech commands. I used to dictate my papers and let the computer write them out. All of these work with various accuracies, but they work. What’s the relation with the state of programming, though?
Agents: Well now. Ever subscribed to a career site? Or how about Google ads, or banner ads in Opera? Agents are out there. But again, what does it have to do with the state of programming?
The stagnation of computer programming is not due to a lack of good ideas – it is due to overwhelming commercialisation too early in its development. Notice that the state of programming is frozen where it was when it first became commercially viable. Now it has momentum and will be too hard to change. Java was a bit of a circuit breaker, bringing a slight change which has followed through to C#/.NET. However, it only succeeded in this small change due to enormous commercial backing.
The article started out interesting but went off the rails at the end.
“Concurrency is a fantasy; after all, computers do one thing at a time.”
Any SMP computer can run programs in parallel for real; there is no need to emulate this.
Lisp/Haskell
At my university we were taught ML (the language that Haskell is based on) and not Lisp.
* Prolog? AI? Genetic algorithms? Voice recognition? Agents? …. Who has any clue about them.
I do! And so does anyone who takes a CS degree at any decent university. I had two courses with Prolog as the main language, and artificial intelligence courses as well.
Many of the issues raised in the article don’t seem to come from someone with a CS degree and industry experience, but rather from someone saying things about what he thinks IT is about. Sorry if I’m wrong, but that’s my impression.
It seems like this was written by someone that hasn’t been in ‘the trenches’ of programming. Jared White made a great point earlier that a major point of programming is the decision on which tool to use. I’d say most people that swear by one language/programming paradigm either had never used anything else, or didn’t spend enough time to actually learn what they were using effectively.
As an engineering programmer (as in, my degree is in mech-e and I do programming for analysis work at the moment), I switch gears from OOP (Java and C#) to functional (C and Matlab) all day long. I enjoy using them all, each in its perfect spot depending on a program’s requirements. It is this ability to figure out, utilize, and appreciate the various available tools that really makes a good programmer, as opposed to someone that can do miracles only in their language on their system.
Today in this computainment industry vendors are making money and it’s a battle for control. The individual computer user doesn’t have very much control, even though open source systems provide a way. This way that is provided however is not clear, because nobody knows the big picture.
Amen.
I see so many projects use C “because it’s fast”, but actually because people don’t know anything else. And let’s be honest: C is primitive, which is fine for certain areas of system development, but for applications you would rather have something more high-level that did provide automagic memory management, for example. Oh, and all those buffer overflows that we get to suffer from would not be such a problem if we used a decent standard library that provided safe functions…
Computainment industry programmers will differentiate themselves with programming languages and operating systems, or rather product lines, which are neither. That’s not much of a distinction, but just enough of a distraction. Why are we still at the beginning, they ask!
The claim that OOP models the real world is rubbish; diamond inheritance, subclassing, the principle of subtyping, the dangers of downcasting, and so on cannot be found in everyday life
Many languages do not have diamond inheritance (luckily mix-ins fit 99% of the use cases). Subclassing and subtyping are really obvious and present in the real world.
And there are many people (Smalltalkers, Rubyists, Pythonistas and so on) who don’t have any problem with downcasting at all.
Sure, you can mess things up so that you’re thinking for the machine instead of thinking in the domain.
That is the reason that frameworks like Naked Objects are so successful: they bring OO back to where it should stay, simulating the real world.
You’re young, that’s why.
Could someone comment on new directions in logic programming languages? Prolog is a bit dated now… and it has limitations that are a distraction from its intended aim… I wonder if newer, more modern logic programming languages have emerged… Prolog was very efficient, and its meta-interpreter abilities could be put to very good use!
Goto is the most elegant (and fastest) way of implementing state machines. The beauty is that the CPU instruction pointer maintains the state for you.
Couple this with coroutines, which allow you to suspend execution and then resume again from where you left off, and you have a truly powerful combination for any parsing problem you care to name.
I use this combination a good deal.
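For anyone who hasn’t seen the technique, here is a toy sketch in C (a word counter rather than a real parser, with names invented for the example; coroutines are left out since standard C has no native ones):

#include <ctype.h>
#include <stdio.h>

/* Two states, "between words" and "inside a word"; the current state
   is simply which label the instruction pointer is sitting at. */
static int count_words(const char *s) {
    int words = 0;

between_words:
    if (*s == '\0') return words;
    if (isspace((unsigned char)*s)) { s++; goto between_words; }
    words++;
    goto in_word;

in_word:
    if (*s == '\0') return words;
    if (isspace((unsigned char)*s)) { s++; goto between_words; }
    s++;
    goto in_word;
}

int main(void) {
    printf("%d\n", count_words("  the quick  brown fox "));   /* prints 4 */
    return 0;
}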
> Extreme programming works nicely because most of the time,
> what you have to do is code and test things very very
> carefully rather than solve problems you don’t know how to
> solve
eXtreme Programming is very useful when you don’t know how to solve a problem! It lets you discover things in an evolutionary manner (and spikes help a lot).
Your analysis of why Objective-C is unpopular is overly simplistic. Objective-C is not popular because it has not been marketed or promoted properly. Having learnt Objc, and then having to write Java, has made me appreciate Objc that much more.
It has nothing to do with having to write code “like” a computer. Objective-C is still C deep down, after all.
This article reads like it’s been written by an old-school VB programmer. In those circles, OOP and more than a single thread were quickly waved off as “you’re complicating too much!”
I have a developer friend that does everything he can in either 100% C or assembly. I can see using it for speed in certain situations, but when it’s extremely complex or just not speed-intensive, give me OOP any day. The ease of use and quick development cycle far outweigh any type of speed advantage.
I find most of the development time is spent getting the product perfect for my end users. I would much rather spend time there getting the interfacing and mechanics right than spend a lot of time on syntax and deployment.
@Mat – I agree this sounds a lot like the VB programmer chant; I have to say, though, if my first language was VB, I probably wouldn’t look to use much else. For what most people use it for, it ‘just works’, and is very popular (I’m no VB man; to tell the truth, a VB guy at my job and I ‘compete’ language-wise).
I was about to write a response, but most of the issues brought up in the article were already addressed by other posters and… well, face it: the text is rubbish.
I must say, though, that I am rather surprised that this … text made it onto the OSNews website. This is not an article or an interesting opinion, but only a rant by some computing/programming newbie.
I know his type; he’s probably a teenager who just finished reading his first book on C programming, and is now certain he knows all there is to know. He probably also overheard some teacher or older colleagues (that he looks up to) talk derisively about Computer Science people and now thinks he has to loathe Computer Science as well.
Oh well… he’ll grow up; hopefully he’ll learn more about the programming craft and will be embarrassed by this text; or… he’ll end up being a mediocre C hack, producing code filled with buffer overflows and memory leaks. Oh well…
I’m just wondering about one thing: He claims Unix style I/O is different than Java I/O? Both use Streams; where’s the difference?
Please Eugenia, or whoever posted this thing in OSNews…be more selective.
This article is absolutely mediocre and offensive for anyone who actually spent time learning computer science.
> … to functional (C and Matlab) …
functional programming in C??
Surely you jest.
>> … to functional (C and Matlab) …
>
>functional programming in C??
>
>Surely you jest.
Well, I don’t think Ellootre is joking, this is just a rather common mistake (that “functional programming” is non-OOP programming, when in fact “functional programming” is the alternative to “imperative programming”).
And yes, one could try to do things in a functional manner in C (by using only constructors and selectors and avoiding mutator functions) but that would be rather awkward and it would require some special memory management tricks (allocate a lot of temporary stuff in a “local” heap then discard all the temporary stuff in one step by free()-ing the entire local heap and keeping only the “final” result on the stack or as a separate object on the “global” heap). But I repeat, this is quite weird.
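For what it’s worth, here is a tiny illustration of that constructor/selector style in C, using plain malloc/free rather than the local-heap trick described above (the pair type and the names are invented for the example):

#include <stdio.h>
#include <stdlib.h>

/* An immutable pair: one constructor, two selectors, no mutators.
   "Updating" a value means building a fresh one. */
struct pair { int first; int second; };

static struct pair *make_pair(int first, int second) {   /* constructor */
    struct pair *p = malloc(sizeof *p);
    p->first = first;
    p->second = second;
    return p;
}

static int pair_first(const struct pair *p)  { return p->first; }    /* selectors */
static int pair_second(const struct pair *p) { return p->second; }

/* A "functional update": returns a new pair instead of mutating. */
static struct pair *pair_with_first(const struct pair *p, int first) {
    return make_pair(first, pair_second(p));
}

int main(void) {
    struct pair *a = make_pair(1, 2);
    struct pair *b = pair_with_first(a, 10);
    printf("%d %d\n", pair_first(b), pair_second(b));   /* prints 10 2 */
    free(a);
    free(b);
    return 0;
}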
Programming is not the same as decades ago. Decades ago, languages were harder to work with. Let’s examine old BASIC:
1. Editing was made mostly through line numbers. Hopefully you had the RENUM command to help you out.
2. Numeric variables were using slooow floating point math. Integer variables were introduced later and you had to remember to use % or to do a DEFINT.
3. Some older BASICs did not even have ELSE in their IFs. Multiple commands for the IFs had to be rolled in the same line with colons, or rerouted through a GOTO.
4. The use of GOTOs and GOSUBs could be a mess if you were not careful.
Some people had to go straight to good old assembly and hand-code much of the stuff that just flat out needed performance. Independently of the current ease (or lack thereof) of coding for modern CPUs, it was harder to code for CPUs back then. Ever coded for a 16-bit real-mode system (8086)? Ever coded for the 6502/6510 (used by the Commodore 64)? That CPU had only 3 8-bit general-purpose registers, and some operations could only be done on one of them. There was no multiplication or division.
Other languages such as C eventually came along, but they were not as easy to use back then. You had to be careful to do proper memory management under a segmented system such as DOS in real mode, what with all the compatibility concerns (EMM, XMM, yadda yadda yadda).
As for modern languages, I prefer Java and C# (SYNTAX-WISE) because, once you learn them, you have a lot at your disposal. However, I think that the APIs are a killer. As APIs evolve, many of them hang around bloating your system. ODBC… ADO… JDBC… etc… Then there’s DirectX… OpenGL… there used to be Glide and there’s still Java 3D… I believe developers are more productive when APIs are fewer, more solid and straightforward, and the systems do not rely on legacy stuff, but then again…
10 years ago I was in high school and they taught us BASIC, PASCAL, DBASE and C. Today they teach the kids PASCAL, VISUAL FOX, C and C++. It is a step forward, BUT the inertia is enormous. Why don’t they teach PYTHON (algorithms), PYTHON (objects), PYTHON (GUI), PYTHON (patterns)?
A few points —
“Reusability, once seen as the key to solving the never-materialized software crisis, is a fallacy of false analogy”
This is so far from the truth in the corporate world that it hurts. We spent quite a bit of time setting up a reusability framework for our (very) specialist area. Turnaround time on new projects now is almost 1000% faster using the framework classes than rewriting, making our clients (and my boss) much happier. Plus, a fix in the framework fixes five products simultaneously. Dynamically loading class frameworks means that we ship a single patch DLL rather than having to reship our entire product for an upgrade or a bug fix, plus reducing our testing and deployment requirements for each change.
Where would the world be without MFC (ok, jokes please), Win32/FX, libXML, OpenGL, DirectX, Qt, standard libraries, etc. etc.? You talk about writing your text editor — try doing the GUI without using a reusable class such as CWindow/CView (MFC), NSWindow (Cocoa) etc.; you’ll find (unless you’re using VB or the like) that it’s quite a challenge.
“A software component can be copied without any cost”
Until you find a bug in the original component code, and then have to make the same bug fix through EVERY product that you’ve written, rather than just re-linking against an updated library. Or, you find a huge speed improvement in the original code, and to affect that change in all your products you need to manually change each copied section of code EXACTLY the same. Then test each new change individually. The cost is huge in comparison.
“Concurrency is a fantasy. After all, computers do one thing at a time”
Again — ouch? What about multiprocessor boxes, multicore processors, hyperthreading, altivec? We have clients with 16-processor machines, if our (server) software could only use one processor at a time, not only would our clients be annoyed, but throughput would drop drastically.
“Prolog? AI? Genetic algorithms? Voice recognition? Agents? …. Who has any clue about them.”
Guess you’ve never played a (recent) computer game. Imagine Half-Life without AI. We use remote agents in our (shipping) products, and use AI (i.e. learning behaviour) techniques to manage load balancing and process-path optimization. Where would anti-virus programs be without heuristics (e.g. AI)?
And these are just a few comments — there are so many things wrong with this article that it’s hard to find anything that’s right.
Sometimes the best solutions come from having a clue about these things beforehand. Maybe open a book and read up on these topics before writing such uninformed, inane banter and pretending that you have a clue.
My guess on this article is that it’s meant to be a troll of sorts and I usually ignore them, but considering this is a ‘page one’ topic on osnews, I couldn’t resist.
Furthermore, look at what the individual can accomplish. You can literally ( litter alley ) build a distribution to taste from source code, and put arbitrary languages and servers on it.
The hardware/software possibilities boggle the mind. Much has been done, in a brief span of time.
Plenty of room for improvement, but we still rock.
At the start, he mentions recursive functions to handle an unknown number of multiple items, in a *functional language*. WTF? Use “map” … that’s what it’s there for – it reduces it to a single bloody line:
map someFunction someList
> The Reusability Fallacy
This is so far from the truth in the corporate world that it hurts. We spent quite a bit of time setting up a reusability framework for our (very) specialist area. Turnaround time on new projects now is almost 1000% faster using the framework classes than rewriting, making our clients (and my boss) much happier. Plus, a fix in the framework fixes five products simultaneously. Dynamically loading class frameworks means that we ship a single patch DLL rather than having to reship our entire product for an upgrade or a bug fix, plus reducing our testing and deployment requirements for each change.
I couldn’t agree with this more. I spend a little extra time building a generic framework for certain application areas and it saves me huge amounts of time on subsequent projects in that area. For instance, I spent the first 2-3 years at this company building software interfaces for hardware that communicates serially over RS-232, RS-422, or TCP/IP. Building a simple component to send and receive data over RS-232 or TCP/IP based on user (or administrator) configuration and then reusing that component (in combination with an RS-232/422 adapter in hardware) saved more time than most people would believe, especially in debugging cycles where new equipment sometimes used different commands from older versions. The applications for 7 different pieces of equipment with as many as 4 different command sets per item (in other words, 4 incompatible versions of the same type of hardware) have a generic input and output command that gets routed to/from the component rather than having to deal explicitly with the communication interface. Not to mention other areas of reusability, such as user interface elements (for example, a display that allows the user to press an up or down arrow for each digit of a 5-8-digit value, handles roll-over, decimals, and min/max values, also allows the user to click on the value itself to input a specific value, is administrator configurable, and hands off the data in a manner that is easily adaptable to the hardware).
Those are just simple examples of reuse. In many cases, entire applications were made from a handful of custom components, many of which were later inherited or updated to add functionality for devices that didn’t exist when the initial software was written.
Where would the world be without MFC (ok, jokes please), Win32/FX, libXML, OpenGL, DirectX, Qt, standard libraries, etc. etc.? You talk about writing your text editor — try doing the GUI without using a reusable class such as CWindow/CView (MFC), NSWindow (Cocoa) etc.; you’ll find (unless you’re using VB or the like) that it’s quite a challenge.
and even if you’re using VB or the like, you ARE using those reusable classes, you’re just not handling the dirty work yourself. Look through the code in a VB.Net project sometime and you’ll find all of the details of dealing with Windows.Forms are very similar to the way the same thing is done in C# or Managed C++.
“A software component can be copied without any cost”
Until you find a bug in the original component code, and then have to make the same bug fix through EVERY product that you’ve written, rather than just re-linking against an updated library. Or, you find a huge speed improvement in the original code, and to affect that change in all your products you need to manually change each copied section of code EXACTLY the same. Then test each new change individually. The cost is huge in comparison.
Exactly; copying code has no cost only to someone who doesn’t pay (or get paid) for a coder’s time. Re-linking is computer time, which doesn’t always have to impact a coder’s time significantly. Re-writing/re-factoring/re-implementing/re-copying code is generally done with some degree of human intervention that is, at best, a little more significant than re-linking or even a complete re-compile.
And these are just a few comments — there are so many things wrong with this article that it’s hard to find anything that’s right.
This is so true that I’m not even going to add my own comments to the rest of your post. The article can only fall back on poor English as an explanation of how it managed to get so much so wrong.
people
I totally agree with the author.
You should check LabView, SoftWire, GeneXus and DeKlarit.
I think those are the future development trends, and they work really well. GeneXus includes an AI module; with all the requirements defined, it develops the entire app all by itself.
DeKlarit is the little-brother version of GeneXus, working as a plugin for Visual Studio .NET.
SoftWire and LabView are both graphical development environments (LabView is tailored to the automation industry, and SoftWire is a general-purpose tool).
In my experience, all these 4 environments have the advantage that designing and developing are the same process. They are worth a try!
Only because *other* people took the effort to program the components you use in a more difficult programming language.
Why is using scripting languages more productive?
Because you reuse a lot of stuff others made for you, and because you’re allowed to take short cuts. And that means you’re able to program without having to think as much about what you will be programming.
So that might also be the drawback of more ‘detailed’ languages (call them low-level, whatever): you can’t take as many shortcuts, so you have to think more up-front about how you will accomplish something.
But it also means that the quality of your code can be much higher, if you take the right approach. If you don’t take the time and effort, your code will be crap and you’d be better off programming VB.
Maybe that’s why people like Java and C# so much: you get a language that is suitable for detailed programming, but still a huge library you can use.
Oh, by the way, in VB you’re extremely dependent on the quality of the components you use. If they’re crap, then your productivity will sink like a stone. I think all VB programmers can remember a time when a component should’ve done something and it just didn’t work. And your 2-day estimate suddenly became a 3-week, 18-hours-a-day, pizza-eating effort.
I see this as reaching a difficulty point. With Java, everything takes more effort from the start, but when your project grows and gets more complicated, the effort of programming stays the same, or even gets lower if you know what you’re doing.
With VB you can do a lot with little effort, but you can suddenly get to the point that you have to do impossible things just to get something working.
Every language has its pros and cons. The trick is to use the right tool for the job.
Oh btw, in a certain sense I think the writer of the article is right. Maybe the fact that we still use similar languages *has* to do with upbringing. As a programmer with experience in C/C++/Java/ObjC/Pascal/VB I find it difficult to learn a programming language which has a very different syntax, or a really different approach, like functional languages.
It’s like the shift from procedural to OO, or linear to event-driven, or AOP. Hard to get your head around, but when you make the shift, it takes no effort at all anymore.
I keep hearing the same thing from sysadmins that occasionally need to write a shell or Perl script or even a piece of C code.
DG
It’s rather clear that the author lacks knowledge of many of the subjects he discusses. Many points have been made in previous comments; the clearest example is probably the one about concurrency – apparently the author has never listened to an mp3 while doing some boring work or surfing the net.
Please, read more before writing.
Why was this posted to the front of OSNews? This kind of ignorant editorial being prominently displayed simply lessens the credibility of OSNews as a legitimate technical news source.
It boldly touches on so many areas, essentially calling them all bunk, and not even attempting to give any justification other than the implied “I think”.
Hopefully the editorial staff at OSNews can avoid this in the future, or at least relegate it to a forum-type section.
Agreed, I’ve seen a few articles that are below par. On the one hand I think it’s nice that people have a soapbox and that you give an opportunity to start a discussion. On the other hand it lowers the credibility of OSNews itself.
people
I totally agree with the author.
I think you read a different article.
You should check LabView, SoftWire, GeneXus and DeKlarit.
LabView has been extensively used in my office in the past, and is currently being reviewed for future use. SoftWire is also part of the review for future use, as a trial version was shipped with a number of cards we purchased over the last 2 years (which are also supported by LabView). In the end, they’re primarily object libraries with a development environment that makes prototyping extremely easy. Going through the article will show many cases in which the author more or less states that this is either undesirable or just impossible, yet LabView, especially, has been around for quite a long time and works quite well.
I think those are the future development trends, and they work really well. GeneXus includes an AI module; with all the requirements defined, it develops the entire app all by itself.
DeKlarit is the little-brother version of GeneXus, working as a plugin for Visual Studio .NET.
Again, the possibility that any level of AI could exist which does this is assumed invalid in the article.
SoftWire and LabView are both graphical development environments (LabView is tailored to the automation industry, and SoftWire is a general-purpose tool).
Perhaps more important than the graphical front end that these tools offer are the object libraries they put together. In almost every case the hardware that these tools abstract is otherwise accessible only through (sometimes obscure) low-level libraries. In both cases you can get beyond the GUI and tailor the code to your needs, but the object libraries allow you to prototype a solution extremely quickly.
In my experience, all these 4 environments have the advantage that designing and developing are the same process. They are worth a try!
That’s a statement I can’t really agree with. There should still be some level of design before you touch LabView or SoftWire. Like I stated previously, you can do some very fast prototyping in these environments, but you should have some sort of design before you even buy the hardware these tools are often used with (at least in the areas I have encountered them).
Still, the 4 environments, if anything, prove exactly the opposite point that the article appears to make. We’re not limited by the computer’s architecture, object oriented programming and reuse can be a good thing, and higher-level languages and interfaces can and do help working developers.
The author is correct and insightful in stating that reusability is a dream. Haven’t decades of development taught us anything? Shouldn’t we have software systems that have abstracted extremely complex operations in an automated, intelligent & self-executing fashion by now? Shouldn’t it be ubiquitous?
The author is correct and insightful in stating that reusability is a dream. Haven’t decades of development taught us anything?
Decades of development has taught us that while we are advancing more quickly than almost any other industry in history, we still have a long way to go. Software has spent much of the last 2-3 decades evolving rather than being rebuilt every couple of years from scratch, which is in itself a testament to reuse. Software that was developed 2-3 decades ago can, in some cases, be run on computers available today running modern operating systems, even when it was often assumed then that no one would be running that software for more than a few years. At the same time we’ve had the evolution of the GUI, near-constant increases in the speed and capabilities of the hardware, a fundamental shift in the interface hardware for computers (the mouse), and the development of a multitude of programming languages either for specific or general use, many of which have faded into obscurity. The very longevity of a language like C or C++ is a testament to the design of the languages, but the design of languages like C#, VB, Python, Perl, Java, and so on are signs of the gaps to be filled.
Shouldn’t we have software systems that have abstracted extremely complex operations in an automated, intelligent & self-executing fashion by now? Shouldn’t it be ubiquitous?
Evolution and reuse of software often leads to slower transcendence beyond the existing system. The mass-marketability of software has also slowed the evolution of software (because breaking compatibility is extremely taboo, for example, and people are unwilling to try a new OS if it does not have X, Y, and Z applications).
Still, how do you define an extremely complex operation? How abstract must it be? What determines whether it has been done in an automated, intelligent, self-executing fashion? How many people really want computers to do everything for them without being told what to do?
C# and the .Net runtime make reading and writing from the console as simple as Console.Read and Console.Write (or ReadLine and WriteLine), but for some that’s just not simple enough. Reading and writing from a textbox on a form becomes as simple as someStringVariable = textbox.Text or textbox.Text = someStringVariable, but for some people that’s not simple enough.
Should it be ubiquitous? Well, different people have different views of how it should be done. At some point it will become ubiquitous for a certain percentage of the population, but there will always be someone out there that uses Linux without a desktop environment and still wants to program everything in C, which is still a step up from coding directly in an assembly language.
“Shouldn’t we have software systems that have abstracted extremely complex operations in an automated, intelligent & self-executing fashion by now?”
Isn’t a windowing system a good example of this? In the DOS days, everyone had to write their own window management, and handle the notifications and feedback and drawing. With systems like VB, Windows Forms, Interface Builder and the like, aren’t these a good example of how ‘extremely complex operations’ have been ‘abstracted’ into an ‘automated, intelligent & self-executing fashion’?
And following the same logic, all scripting languages such as Perl are effectively reusable libraries that sit upon tried-and-true languages such as C and assembler. Try writing a complex string-searching function in C and in Perl and you’ll find that Perl is a far quicker, more elegant solution (i.e. abstracted away from how we ‘used to program’).
And echoing the comment by daniel c, products like SoftWire abstract these complex issues even further away from ‘how we used to program in the past’, which I believe was the author’s original point.
I agree wholeheartedly with the comment by KamuSan — ‘Every language has its pros and cons. The trick is to use the right tool for the job’. It’s very easy to write a 60-storey doghouse using the wrong tool. Try writing a Windows device driver using VB and you’ll find that you’ll have no end of problems (if it’s even possible). Sometimes to get the job done you have to think, rather than just draw pretty dialogs in VB.
He he, stupid? In the real world, lamps, couches, dogs, cars, etc. all have certain ways that you interface with them and no other way. That is exactly the same idea behind OOP: each object has certain ways you interface with it and no other way.
Where did this guy learn about programming languages? Or does he think that learning to use a language is the same thing as learning about programming languages?
people
I totally agree with the author.
(…)
SoftWire and LabView are both graphical development environments (LabView is tailored to the automation industry, and SoftWire is a general-purpose tool).
In my experience, all these 4 environments have the advantage that designing and developing are the same process. They are worth a try!
If you agree with the author, you might think these tools pop up like mushrooms. Otherwise, someone still has to create them, very likely using all the things the author trashes – component reusability, concurrency, etc.
How interesting. One of the worst articles I read on OS News sparked the most intelligent thread of comments I have ever seen here.
The article could have gone in some interesting directions, were it more carefully thought out. But the author’s arguments are weak at best.
1. No one knows what OOP is. There have been countless books and articles on object-oriented design principles, and there is general agreement on its usage. Certainly there are disagreements, but the basics of instantiation, data hiding, inheritance, polymorphism, etc. are well-documented and well understood.
2. Concurrency is a fantasy. No; large applications must be designed for concurrency. Whether built on a large SMP server, or on a cluster of hundreds of small, cheap systems, concurrency is real, and is more important than ever. The good news is that in many cases, we can use existing tools (such as databases) to hide concurrency from most developers.
3. While it’s true that software components cannot be reused carelessly, there are countless examples of reuse, particularly in the free software world, where there is a vast body of work available at no cost. Every time you make a runtime library call in your language of choice, you are reusing code. Impressive programs can be written in Python, for example, using its huge set of built-in libraries. Registries or databases are used for persistent storage, HTML for user presentation, and various libraries for file and data compression.
4. Haskell… is a miserable failure. I personally find functional programming languages difficult to read, and (Emacs notwithstanding) agree that they haven’t caught on as mainstream languages. But only a few new ideas take the world by storm; the rest will be specialized tools or academic curiosities. That in itself does not prove that programming hasn’t changed in the past few decades. (Similarly for AI, genetic algorithms, and the like). Many would argue that OOP has changed programming; the web has undoubtedly also changed application design.
One might be able to argue that the programming paradigms used today evolved from more traditional ones, and that things haven’t changed as much as one might think. But it would take more research and reflection than this article contains.
I wish that the OSNews staff would change their attitude about spelling and grammar. These things are IMPORTANT. Having proper spelling and grammar helps to establish credibility and makes the text clear and understandable to the reader.
Of course, I don’t blame authors for poor English. That’s excusable if English is not their first language. There are plenty of languages I don’t speak. But, for the reasons mentioned above, articles should be proofread and corrected before they are posted.
My word. This was a bad article. Regardless of the quality of the English, it is bad.
One-sided, without balance, a list of points. Bound to start a load of banter about how bad it is.
Does OSNews do any sort of vetting of the articles it publishes?
I used to come to OSNews for really informative views and information, but publications like this will turn me away.
When really a little bit of care, and sending the article back to the author with suggestions for improvement, could have saved me.
Please, folks, don’t lower your standards.
“Stay in high school and for godsakes, learn proper English and grammar. Yours is terrible. Whatever point you were trying to make was lost.”
A little bit harsh, considering that the original author is probably writing the article from a country where English is not the native language. However, I agree that his point was most likely lost in the (poor) translation…
…it doesn’t mean that they are totally useless. Being a computational linguistics student, I write programs to crunch lexical items and not numbers. It is only in the past year or so that I’ve started to realise that you can’t have a “one programming language fits all”. There have been times when I was writing a Java program and a problem arose where I thought, “if only I could do this in Prolog”.
Of the different languages I know (Java, Prolog, Perl, Lisp, shell script (is this a programming language??)), I can think of various problems that can be more easily solved by each language.
I couldn’t read this article, too poorly written.
Regarding:
“And just for the record, a recursive function is really, really bad, especially on an unbounded list (unbounded = who the heck knows how big it is at runtime). That’s a good way to whack your stack and core dump/crash. ”
I’ve used recursion for image and signal processing for 20 years. It’s a very useful tool and has never failed, cored, or crashed. My long term *experience* with recursive algorithms would tend to outweigh your novice *theory* about it.
Maybe it’s your technique…..
“A C programmer is at a great disadvantage when learning C++ and OO. ”
LOL!
Jeez! I laughed so hard at this I thought I was going to swallow my tongue.
Must be, yet another, Visual BASIC programmer.
Shake your head buddy. Learning C++ from C is a 2 hour job.
I couldn’t agree less with the article. The author seems to be arguing that since new programming methodologies are difficult for C hacks and script kiddies that we shouldn’t bother with them. Unfortunately, software is getting more complex. New ways of programming are required to avoid making a mess of it.
Additional comments…
Reusability is NOT a dream. The dream is that you get reusability for free when doing OO programming. The reality is that making code reusable requires extra work. If it isn’t specifically designed to be reusable, it won’t be reusable.
The main advantage of OOP in my experience is information hiding followed by the organizational niceness of objects. Unfortunately, OO design is often done poorly by previous C or script programmers who haven’t learned to change the way they program.
Concurrency is the FUTURE of programming. Look at the hardware. Intel, AMD, IBM, and Sun are all going multi-core. Also, any sort of distributed programming requires a working knowledge of concurrency. Yes, concurrent programming is hard, but it is not a problem space that can simply be ignored.
“And just for the record, a recursive function is really, really bad, “
Hehe try writing an iterative raytracer 🙂 There are some algorithms that run far better recursively than iteratively.
“…especially on an unbounded list (unbounded = who the heck knows how big it is at runtime). That’s a good way to whack your stack and core dump/crash.”
Probably more of an indication that the problem would be solved better using another approach.
Uhh, no, not really; learning C++ from C is more like a 1-2 week job plus lots of experience. We are not talking about using C++ as a better C, we are talking about using all the class capabilities like polymorphism, virtual functions, multiple inheritance, etc.
Hell, a moron can learn the basic syntax of C++ for procedural coding in 2 hours, but to really learn the power of C++, you need much longer if you are coming from C.
Obj-C from C is a much, much better option; in this case you are talking about 2-3 days to learn the ADDED language features, and since the OO-ness is inspired by Smalltalk, it is fairly uncomplex, unlike the multiplicity of interactions one could get from C++.
Well, you can create highly reusable code in almost any language (some more easily than others). OO languages lend themselves well to making highly reusable code with less of an impact on productivity than if you had tried to do the same thing in C or Fortran.
Actually, I found ObjC more confusing than I did C++. The ObjC lingo probably confused me the most, along with the concept of memory allocation and deallocation.
The jump from C ‘functions’ to C++ ‘methods’ involves a great deal more similarity than the jump to ObjC ‘messages’ (especially when it comes to parameter list declarations). And likewise with memory management: the malloc->new and free->delete translation between C and C++ (and knowing C, you’ll understand that if you allocate memory you must free it) is a far easier concept to grasp coming from a C background than the memory-managed scheme of ObjC (alloc, release, retain, autorelease).
“Hell, a moron can learn the basic syntax of C++ for procedural coding in 2 hours, but to really learn the power of C++, you need much longer if you are coming from C.”
I couldn’t agree more.
Shake your head buddy. Learning C++ from C is a 2 hour job.
Spoken by someone who must think that the only difference between the two is switching to cin and cout in place of scanf and printf.
It took you two hours to learn that? At that rate, you’ll never learn C++.
Shake your head buddy. Learning C++ from C is a 2 hour job.
Only if you already know OOP. Otherwise you’ll end up with C-style programs, where structs have been replaced by classes, and a bunch of public getters and setters.
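Something like this hypothetical pair, to make the difference concrete – the first class is just a struct with ceremony, the second actually hides an invariant:

```cpp
// Hypothetical illustration of the two styles described above.

// "C with classes": a struct renamed to a class, plus getters and setters
// that hide nothing -- every caller still pokes the raw field.
class Account {
public:
    double getBalance() const   { return balance; }
    void   setBalance(double b) { balance = b; }
private:
    double balance = 0.0;
};

// OO version: the invariant (no overdrafts, no negative deposits) lives
// inside the class, so callers cannot put the object into a bad state.
class SafeAccount {
public:
    void deposit(double amount)  { if (amount > 0) balance += amount; }
    bool withdraw(double amount) {
        if (amount <= 0 || amount > balance) return false;
        balance -= amount;
        return true;
    }
    double available() const { return balance; }
private:
    double balance = 0.0;
};

int main() {
    SafeAccount acct;
    acct.deposit(100.0);
    acct.withdraw(30.0);
    return acct.available() == 70.0 ? 0 : 1;
}
```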
“resulting in a profound fear that someday the program might halt all of a sudden due to deadlock.”
Programming always results in a profound fear 🙂
If I understand correctly (and I may not), the author is trying to say that the reason today’s programming is still similar to programming in the past is that there really is nothing more to programming than what was done in the past. The languages and techniques of “decades ago” must be just fine for today.
Actually, programming today is very different from programming decades ago. I first started to program two decades ago, and today’s tools are far, far better than anything I saw at the time: integrated IDEs, tools that don’t require minute configuration to set up, abstraction, reusable (and much reused!) object-oriented class libraries… Much of my programming today is done with Maple. The amount of code reuse I do is staggering compared to the past.
If programming today is in some ways not much different than it was in the past, that’s because we learned to solve some problems well enough in the past that the people who still work on those problems learn to program in that manner and lack either the need or the interest to learn new models of programming. (Hence, much work in math and physics is still done in Fortran… for better or worse.)
But today we solve lots and lots of problems that, while they could be solved according to old methods, only a fool would use the old methods. To repeat an argument I read 15 years ago (and is probably much older): It’s true that assembly language gives you perfect control over each “grain of dust”, but what idiot would build the foundation for a skyscraper, using a teaspoon?
Eugenia,
It’s clear you’re a very good judge of OS-related articles; however, most of the programming articles posted on OSNews have been major flops, judging by the comments. I know there’s no easy way to swallow that, but do what you do best – OSs!
This little article is very insightful and I personally agree with most of it, especially with the OOP delusion and “reusability is a fallacy”, and more generally with how computer science’s obsession with the factorization of algorithmic patterns and data structures has historically proved unproductive.
But I differ radically over concurrency. Concurrency does not exist in the ISA (instruction set architecture, how the software “sees” the hardware), precisely because language and algorithmic research has wasted so much time on delusions such as OOP, reusability, or genericity, and never really tried to venture out of the sequential execution model.
But concurrency definitely exists in the hardware (all transistors on a chip are potentially useful at the same time, after all), and the abstractions at the interface between the ISA and that hardware concurrency – superscalar execution, pipelining, hardware multi-threading, etc. – are not acknowledged enough by software developers.
It is in this space that programming language research has a lot of exploration to do.
I probably should have used a better phrase for that, like “non-OOP” languages (Matlab isn’t functional either). At any rate, what can be demanded after reading such an article at 4am?
I guess it should have been: “OOP languages (C# and Java) to non-OOP languages…”
🙂
I’m assuming this article was posted to intentionally spark high-quality debate, notwithstanding the fact that the article itself is naive and wrong.
And please give this poor writer a break. I’d like to see the people that complain try to write an article in Japanese that is half as well written as this one.
On the other hand, though, I really had a hard time trying to parse the article. Maybe someone can volunteer time to proofread grammar and spelling?
I’m no proponent of VB. I find it “just works” for VB zealots because many (not all, by any means) only know a snippet of programming in that language, and usually base their projects on the bounds of it (which can be said of any software zealot).
On further thought: could it be that this “stagnation” the author feels is due in part to his refusal to accept other languages? I wouldn’t find it a wrong statement to say “People who do not know more than one programming style generally have a confined world view when it comes to program possibilities.” I know that if I weren’t open to other programming styles, the world would be a small place indeed! If this statement were true, then he, like a lot of people, would be in some sort of self-made prison, unhappy only because of misunderstanding.
For the real CS guys out there: don’t they teach some sort of “love a language” program or anything? I would think a course that rapidly switches languages for different deployment needs would round out any kind of CS education.
I’d like to add to the growing list of people who think this article is garbage … a black mark on OSNews’s pretty face.
“For the real CS guys out there: don’t they teach some sort of ‘love a language’ program or anything? I would think a course that rapidly switches languages for different deployment needs would round out any kind of CS education.”
I’m not sure what you mean. At my school we start in C++ (even though we should be starting in C and moving to C++). They teach a broad range of other courses, some titled by the language (I intend to avoid those classes at all costs). Generally, though, Computer Science is taught as just that: the science of programming a computer. So they try to focus on algorithms, data structures, how things are actually getting done on the computer, and things like that. Then they teach you the syntax in a certain language so you can go home and toy with it. I think they pick C/C++ for beginners because the languages maintain strong abilities to manipulate things with pointers, and because they are fairly strongly typed. Teaching in a language like Perl would be silly, because the students would come out thinking integers and strings and characters are all similar. Teaching it in VB would pose the same problem. I see a lot of people who use VB think of strings as scalar values… This leads to some inefficient code. It happens in C++ too if they fall in love with the string class.
Languages are tools, just like hardware. It’s the logic that’s the science. The idea is that, hopefully, the methods, math, and procedural skills gained will be easily transferable to multiple kinds of hardware, systems, languages, etc. Maybe someday we will be so good we can program people (write how-to manuals)!
omgomg!!!!!!!!!!!!!!!!1111!11
Some people are so-called “script kiddies” hacking away at PHP.
Do you mind clarifying your statement so as not to inadvertently associate the professional programmers, who happen to build robust solutions in PHP, with “script kiddies”, who happen to download some w4r3z b0tz and declare themselves great h4x0rz because they changed a few lines to make it spew obscenities without causing a parse error? K-thanx.
That roadkill you see in the center lane is a programmer who forgot to get out of the way as progress passed him by.
Old style and old tools do not create good code — they create great opportunities for offshore programming.
Change everything about the way you code!
A ton of people are commenting who have obviously only been exposed to procedural programming and its complicated cousin (OOP).
Encapsulating information in one place is a great idea for programming, but the type of OOP pushed on the computing world is modeled around the notion of types, which are unchanging roadmaps of how data and logic are structured. I have yet to see proof that type-centric OOP has any benefits outside animal/shape taxonomies and possibly GUIs (and actually, other programming methods such as prototype-based languages do a better job there). OOP was a good idea until someone decided types are the end-all, be-all of data-driven design (which is what OOP is when you get down to brass tacks).
If more languages allowed easier data manipulation, dynamic function creation, faster development (dynamic typing), and message/event-driven methods, the programming world would be a much better place.
The best language overall would be Lua or Python, imo. Lisp/Scheme are more powerful, but readability is a factor, despite what CompSci professors tell you…
Why is it not possible to report abuse even for articles (not just posts)? This one really needs it.
This article is garbage; it shouldn’t even be called an article. I wonder why it is on OSNews at all (as a few others asked too). It’s obvious that this man doesn’t know what he is talking about – or just wants to provoke with some obvious “untruths” that are not worth commenting on. I also wonder why so many are trying to explain OOP or concurrency to someone who has obviously already made up his mind &/| is not capable of understanding those things.
I guess he is one who codes in Delphi or VB because of the IDE rather than the language features, because he doesn’t know what language features are or how to choose a language to suit the work.
It also seems that this person has read more about programming than done programming… but who cares about him.
I could maybe say this article is amusing, but I didn’t laugh, so I won’t say it; I think it’s because of the style it is written in. It’s meant to sound like “words of insight”, but it actually uncovers how little insight and knowledge the writer has.
But… it is amusing to see so many “grey heads” rise up to defend the truth. No, people, this article can’t hurt; only a fool will fail to see this man doesn’t know what he is talking about.
On the other side, it’s interesting to read some responses about learning languages and CS. I’m a student, just taking a course in C++. I am almost done with my studies and I thought I knew pretty much everything about C & C++ (or so I believed), and took this course just to pick up some cheap points. But after some tricky exercises I actually realized how little of C++ I was using. My code was for the most part C that looked a bit like Java, coded in C++. Yeah – sounds weird. However, if anyone thinks that learning C++ is a matter of a few hours, then he doesn’t know what he is talking about. Syntax rules are probably learned after a few well-thought-out examples, but syntax rules are not knowledge of problem solving.
Designing programs that will last as you expand them over time, or a good library that will be reusable by others, takes much more knowledge of how C++ works than just knowing the language syntax (even if you have several years of Java experience and some C/C++, as I did).
Like someone said, every language has its pros & cons, and you actually need to know them for any serious work to be done.
And for jcat: programmers should not go away – they should acquire new knowledge. Knowledge of new technologies doesn’t disappear with a birthday; nor does it come with a diploma after your 22nd birthday – it only comes with hard work. Btw, who do you think created the new technologies?
To end this, I must say a word about statements of the type “who knows what Prolog, AI, concurrency etc. are…”. I can only say one thing: go to university, study CS, take classes, educate yourself. In your article you are starting from your own (un)education and think the whole world should live in the same (un)education as you. I know this whole response is a big paraphrase of things said here and elsewhere, but here we go again: you are a prime example of a person unaware of how little he knows.
cheers
PS:
Yes – I need to work on my English; it’s not my first (nor second) language, but if you want to, you will understand.
Eugenia, how do we “Report Abuse” on an article? I *do* feel offended by many of this article’s claims.
That’s what I’m reading. You sound like a frustrated programmer being pushed by your contractor just a little too hard.
That’s the only problem with programming today: programmers with a lack of perseverance and optimism. As programmers we must be willing to learn and to implement; otherwise we risk becoming stagnant and erroneously pessimistic.
My suggestion: if you don’t have a clue about something, either learn about it, or refrain from discounting its relevance with these ramblings.
A system to score articles as well as posters would be good. Reading these comments, it’s all too obvious these people don’t know much about VB – in particular about the evolution of VB and Visual Studio, and the evolution of other languages, platforms, and IDEs, and how they compare to each other. For four years now VB has been one of the many syntaxes for .NET, and except for the code editing and some goodies that one .NET language might get a little later than another for dumb reasons, they are all similar.
They’re called messages because they behave more like messages than methods. You can send a message to any object in ObjC and let the receiving object decide how to handle it (by forwarding or whatever), but this isn’t true in statically typed languages such as Java – all the methods a class supports must be known before the program runs.
One situation goto is great for is when you’re dealing with an exception inside a function that’s a control loop. You need to clean up a whole lot of state, but unwinding everything gracefully would make the flow so difficult to follow that the ugliness of a goto is a fair price for the comprehensibility it buys.
In simpler terms, goto is great for in-function, local exception handling. Its siblings, throw/catch (known as setjmp/longjmp in C), are the non-local versions.
Most network implementations I’ve seen have a bunch of cutouts in the packet-processing code, because it’s just easier (and faster) to jump to cleanup than it is to unwind in order. It’s engineering trumping theory, but that’s how it is sometimes.
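For the record, the idiom looks roughly like this – a made-up sketch in C-flavoured C++, where the file and buffer stand in for whatever state the real code holds:

```cpp
#include <cstdio>
#include <cstdlib>

// Made-up sketch of the cleanup idiom described above: every failure path
// jumps to one cleanup block instead of duplicating the unwinding logic
// at each exit point.
int process_packet(const char* path) {
    int   rc     = -1;
    FILE* file   = nullptr;
    char* buffer = nullptr;

    file = std::fopen(path, "rb");
    if (!file) goto cleanup;

    buffer = static_cast<char*>(std::malloc(4096));
    if (!buffer) goto cleanup;

    if (std::fread(buffer, 1, 4096, file) == 0) goto cleanup;  // treat as a bad packet

    rc = 0;  // success; real parsing would happen here

cleanup:
    std::free(buffer);             // free(nullptr) is a no-op
    if (file) std::fclose(file);
    return rc;
}

int main() { return process_packet("packet.bin") == 0 ? 0 : 1; }
```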
Hah! Looks like this article inadvertently turned into an engineering discussion!
In the examples I’ve read, all the lists are bounded by one thing or another. An image is only so big, and that’s known beforehand. A signal is only so long, especially in realtime.
I was thinking about commercial apps, where your user may accidentally run a query on a multi-GB database that returns every row – and you for some reason need to iterate over that resultset. Recursive function = goodbye world. To me, most of the scientific data sets are pretty small, and the inputs are user-controlled (for the most part).
There you go! Another example!
“user may accidentally run a query on a multi-GB database that returns every row – and you for some reason need to iterate over that resultset. Recursive function = goodbye world.”
This is a misconception, no matter how many times it gets posted.
Proper tail recursion requires NO extra stack space.
Recursive function = goto loop = faster than OOP method call.
Go read up on Scheme and learn a different paradigm than procedural.
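Roughly this shape – a hypothetical accumulator-style sum, not anyone’s real code. Scheme guarantees the tail call is eliminated; in C/C++ you are at the mercy of the optimizer (gcc/clang typically do it at -O2), which is why the stack-overflow worry above is real in C but not in Scheme:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical accumulator-style sum. The recursive call is the very last
// thing the function does, so it can be compiled into a plain jump: no new
// stack frame per call. Scheme requires this; C++ compilers merely tend to
// do it when optimizing (e.g. g++/clang++ -O2), so build accordingly.
std::uint64_t sum_to(std::uint64_t n, std::uint64_t acc = 0) {
    if (n == 0) return acc;          // base case: the accumulator is the answer
    return sum_to(n - 1, acc + n);   // tail call: nothing left to do afterwards
}

int main() {
    std::cout << sum_to(1000000) << "\n";  // prints 500000500000
}
```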
…hehehe… the author has a point. I remember in university when we were taught a bit of Prolog, just as an introduction to “alternative concepts.” One of the assignments was to implement a lexical scanner in Prolog. At one point one of the students raised their hand and said something along the lines of:
“Excuse me, sir. I’ve been trying to wrap my head around this language for a week now and I still have no clear idea of what to do next to finish the assignment. I spend a lot of time sitting at my computer thinking about what to do, and most of the time I just guess.”
The prof replied, laughing, with “sometimes I guess too.”
That’s why programming is still mostly procedural.
“Hell, a moron can learn the basic syntax of C++ for procedural coding in 2 hours, but to really learn the power of C++, you need much longer if you are coming from C.”
Well then, I hope you enjoy your future in *plumbing*.
😉
A not very good article. Wholly bad perhaps.
It does not even have the energy to start a jolly good flamewar about OOP (which I believe has, in its application, tons of flaws – just not the ones the author states), or a good discussion of what makes scripting so popular. (Hint: it is not because scripting languages are “easy”, as another comment would have it. Zero build times, way lower bug rates, and higher-level facilities should all appear in the “because” part.)
Programming practice does have miles to go… too bad this article fails to rise to the ambitions of its title.