“What is the future of programming? I retain a romantic belief in the potential of scientific revolution. To recite one example, the invention of Calculus provided a revolutionary language for the development of Physics. I believe that there is a “Calculus of programming” waiting to be discovered, which will analogously revolutionize the way we program. Notation does matter.” More here.
I’m more of the opinion that the revolution in computing is about how we manage and share code: how we work as teams and how productive we can be. Current technology for interacting with code and managing it is still in the stone ages. The future requires a system as universal and flexible for code as the world wide web is for the Internet.
I have drafted up an idea (but nothing more) about a possible innovation in the way we interact with our source code. Whilst this is not a total revolution (a stupid claim for anybody to make), it can help us rethink our current situation.
http://camendesign.com/kroc/stuff/ooide
Apologies for spam, but here’s an actual possible idea, rather than just an article theorising. We could all sit around theorising all day if we wanted to.
Having read your website, there are certainly some points you got right, but mostly I’d say you’re barking up the wrong tree.
Fact is: you may be able to convert between
if(…){
and
if(…)
{
But every programmer develops his own idioms, which even differ from language to language. What you get when you try to translate them automatically is comparable to translating text with Altavista:
The result is ugly and un-idiomatic.
I have some pretty severe idioms myself; that’s one of the reasons for coming up with this idea. Text parsing is useless for understanding these idioms. Most large projects like to lay down strict style guidelines too. With the code stored as object-oriented classes representing the logic, it’s easy enough to display the code back in just about any layout imaginable.
The user would be able to customize the way the code is displayed to cater for their own idioms, and if it doesn’t cater for them, then they could implement those styles of display themselves using the IDE extensibility.
But more importantly, because the code would not be stored as text, no idioms are enforced on others. You see the code in the style you like, no matter who wrote that code. No system is perfect; the benefits of OO storage would have to outweigh any drawbacks, and that, I think, with input from everybody, is possible.
Hey, I had the same idea – it’s very interesting, especially in regard to i18n and image embedding. I’m afraid that localized syntax, keywords, commands and variables could end up badly – it’d be harder to communicate between programmers, etc. However, maybe there could be chat clients that handle the conversion.
Always pretty printing the code as you like it could be valuable – not actually changing the syntax drastically, but changing the presentation in general could be practical.
You did have an idea I haven’t thought of before – variable frequency of comments (pro to beginner). What I had imagined is the comments (including images, doodles, etc.) completely separate from the code, yet attached to it. Not quite in a sidebar; more like alongside the code yet not part of it. It’s hard to describe…
Something like a caption box or a tooltip?
Not really. The problem with having a separate sidebar is that it wastes screen space. The problem with inline comments is that they space out the code vertically. My solution has all comments to the right of the code, within the same text box, yet it doesn’t interfere when writing code. It just serves as commentary.
For example, if you press return after a line with a comment, the comment remains on the line and the cursor moves onto the next, perhaps auto-indenting.
I haven’t really thoroughly thought it through.
I don’t see the problem, but then I find programming very, very enjoyable, even relaxing. Depends on the program; some types can be exhausting, such as halftones or compression. Maybe it’s all in how interested you are in the subject matter and whose idea you are breathing life into.
“We tend to build code by combining and altering existing pieces of code. Copy & paste is ubiquitous, despite universal condemnation. In terms of clues, this is a smoking gun. I propose to decriminalize copy & paste, and to even elevate it into the central mechanism of programming.”
I’m sure he’ll burn in hell for these sentences 😉
Weren’t modules and OO the modern and WAY less dangerous form of copy&paste?
He’s probably talking about code reuse which is still far from perfect but again, no reason to return to copy&paste.
I for one think we should have a close look at mathematics if we want to know where programming is heading/should head.
If there is to be any revolution in programming it would probably be based on some mix of psychology and some branch of mathematics.
As Paul Graham noted, programming also deals with human weaknesses (he compared it to designing chairs) not always with our most beautiful features…
Looking at mathematics again I would dare to guess that things like $var will become optional or obsolete.
You CAN indicate a vector in mathematics/physics by putting an arrow over it but pros use it only if they don’t want to confuse noobs.
In programming, it severely limits the use of your code. So do things like add_int or add_float, because they essentially prevent you from writing general programs.
Imho overloading (both function and operator) is VERY important.
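The add_int/add_float point can be seen in miniature in Python, where one polymorphic function covers what would otherwise need a family of type-specific ones. A throwaway sketch (the function name is made up for illustration):

```python
def add(a, b):
    # One generic definition; dispatch falls to the operands' own '+',
    # so the same code works for ints, floats, strings, lists, etc.
    return a + b

print(add(1, 2))        # 3
print(add(1.5, 2.5))    # 4.0
print(add("ab", "cd"))  # abcd
```

Had the language forced an add_int and an add_float, every caller would need to pick one up front, and the generic version simply couldn’t be written.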
Also “making simple things simple” can REALLY bite you. Think of Perl auto-flattening lists or languages that require a line continuation character *puke*
In my opinion you should always plan for complicated cases. It won’t kill anybody if he has to type one more line to print “hello world” but it WILL drive you nuts if you try to build complex, nested data structures in a language that does not want you to do that *cough*Perl*cough*.
Note that I’m also bitching about Python (although mandatory whitespace will probably not get as bad as line continuation chars) and Ruby (Sigils) 😉
One last thing:
dynamic typing absolutely sucks, imo.
You have to declare the damn variable only once but there are hundreds of chances to get a typo in its name in which case dynamic languages may think:
Cool, a new var! These sorts of bugs can be hard to find.
Oh, and I also think it’s a REALLY bad idea to try and turn C into a language it was never meant to be instead of designing a new language from the ground up.
Nuff said.
Edited 2006-09-03 13:55
You have to declare the damn variable only once but there are hundreds of chances to get a typo in its name in which case dynamic languages may think:
Cool, a new var! These sorts of bugs can be hard to find.
That has nothing to do with dynamic typing. Rather, it has to do with how a language handles variable bindings*. In C, bindings are created in three places: in argument lists, in variable declarations, and in looping constructs. In Python, there are no variable declarations, so bindings are created in assignment statements as well. There are languages more sane than Python (notably Scheme, Lisp, and Dylan), that will not create bindings in assignment statements, or at least warn you (at compile time), if you try to. In these languages, bindings are introduced in ‘let’ statements, which take the place of variable declarations in C.
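The hazard being discussed can be made concrete in Python, where plain assignment creates a binding if none exists, so a misspelled name silently becomes a brand-new variable instead of an error. A minimal sketch (the variable names are invented):

```python
total = 0
for x in [1, 2, 3]:
    # Typo for 'total': this silently creates a new binding 'totl'
    # on every pass instead of updating the accumulator.
    totl = total + x

print(total)  # 0 -- the accumulator was never updated
print(totl)   # 3 -- the last value assigned to the accidental variable
```

No exception is ever raised; the bug only shows up later, as a wrong result.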
To use some examples (assuming that ‘b’ doesn’t already exist):
In C, the following code will fail:
int a = 0;
b = 4;
In Python, the following code will succeed:
a = 0
b = 4
In Dylan, the following code will fail:
let a = 0;
b := 4;
In Lisp, the following code will issue a warning:
(let ((a 0))
(setf b 4))
The Lisp code warrants some explanation. Lisp doesn’t create lexical bindings on the fly. It does, however, create dynamic bindings** on the fly. The SETF operator, when confronted with a variable that is unbound, will default to creating a dynamic binding instead. Thus, ‘b’ in the statement above is fundamentally different than the ‘b’ in the other three examples. Whereas ‘a’ is a simple lexical variable, and only visible inside of the LET body, ‘b’ will be visible to all code after its creation, as a global variable. The compiler warns you about this behavior, because while some programming gods can make good use of this feature, most mere mortals cannot.
*) “Binding” refers to the mapping between a name and the location in memory corresponding to a “variable”. In other words, the binding is an entry in the symbol table.
**) “Dynamic binding” is an unusual concept for C programmers, because C (and Java and Python) don’t have dynamic variables. In Lisp, a dynamic binding is like an uber global variable. It’s visible in all scopes after the point in which it is created.
What’s the difference, then, between dynamic typing and inferenced static typing?
a = 0
b = 4
will not fail in Boo. Yet it’s still static typing.
There is no relation between dynamic/static typing and binding. The two are conflated because most statically-typed languages have “variable declarations” that serve the dual-role of creating a binding and assigning a type to a variable.
Your example actually just proves the point. You’ve shown an example of a statically typed language that creates bindings in assignments. We’ve previously seen examples of statically typed languages that don’t create bindings in assignments (C), dynamically typed languages that don’t (Dylan), and dynamically typed languages that do (Python), thus establishing the orthogonality of typing and binding creation.
To address the difference between dynamically-typed and statically-typed languages, it helps to consider the fully-general conception of a “variable”. A variable is composed of three things: an entry in a symbol table, which points to a reference to an object, which points to an object. In practice, this chain is collapsed by the compiler, but that’s behind the scenes stuff. In all strongly-typed languages, the object has a specific type. In statically-typed languages, the reference, too, has a specific type. A reference can only point to objects of the proper type. In dynamically-typed languages, a reference can point to objects of any type.
Now, where some languages make things interesting is what happens if you do:
b = 3.0;
In Dylan, this is perfectly legal, and has the function of setting the object pointed to by the reference pointed to by the binding ‘b’ to the object 3.0. In Boo, this may be either illegal (as in C), or legal, but for a different reason. If it’s legal in Boo, the line has the effect of changing the binding ‘b’ to point to a different reference, one which can point to an object of type ‘float’.
What helps is to consider what happens when there is no binding directly to a variable, as in a structure, since this gets rid of the confusing aspect of changing the binding rather than the reference. A field in a structure has no entry in the symbol table, and hence no binding. If the following lines:
a.q = 1;
a.q = 3.0;
are legal, then you’re dealing with a dynamically-typed language. If they’re not, it’s a statically-typed language.
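Applying that test to Python makes the answer obvious. A tiny sketch (the `Record` class is hypothetical, just a bare container for illustration):

```python
class Record:
    """A bare container; its fields carry no declared types."""
    pass

a = Record()
a.q = 1      # the field first holds an int...
a.q = 3.0    # ...and may later hold a float: dynamic typing
print(type(a.q).__name__)  # float
```

Both assignments are legal, so by the criterion above Python is dynamically typed: the reference behind `a.q` may point to objects of any type.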
Edited 2006-09-03 20:46
Actually, that’s legal in Boo – numbers are rather flexible.
I do understand the actual difference between inferenced and dynamic, I just don’t see what advantage the loss in speed gives you. I just see the disadvantage of reduced error checking.
Perhaps this is just the way I program, but I have never found a situation where dynamic typing would produce a more elegant solution than static typing.
Regarding the “flexibility” of numbers. Likely, it has nothing to do with flexibility in number types, but rather the rebinding I alluded to earlier.
As for the utility of dynamic typing — do you see the utility in OOP? Because dynamic typing is just an extension of OO’s polymorphism all the way down to the core of the language.
More generally, dynamic typing is nice because it both allows extensive use of OO programming idioms, and additionally allows for new modes of programming. In Lisp, it allows an evolutionary development model. You start with a simple prototype that does the basic core of what the program should do; then, instead of throwing away this prototype, as you would in C, you evolve it into the final product. Statically-typed languages make evolutionary programming unnecessarily complicated. For example, consider the fact that in most programs, much more code passes an object around than actually uses it. In a statically typed language, when the type of that object changes, all the code that passes that object around has to change, even though that code doesn’t really care what the type of the object is. In practice, this means that programs in statically typed OOP languages tend to have overly baroque type hierarchies, because it’s a pain to create new abstractions in existing code.
Let’s use an example. Say I’m writing a text editor. In my prototype code, I represent a pointer into the text as an integer offset from the beginning of the buffer. Later, I realize that’s not general enough, and want to change it to a full-blown class, encoding a line number and offset in the line. In a statically typed language, changing a basic type like that can be a huge pain, unless you use some fragile “refactoring” editor hack. To avoid that, programmers tend to start with a fully-general class from the beginning. Often, this generality ends up never being needed, and the code’s readability suffers as a result. You see this in Java code all the time. In a dynamically-typed language, if you need an integer, you use an integer. If later you need a full class, changing it will be easy.
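The text-editor example can be sketched in Python. All names here are invented for illustration; the point is that pass-through code never names the position’s type, so swapping the plain-integer prototype for a richer class touches only the code that actually inspects it:

```python
def show_status(pos):
    # Pass-through code: it never names pos's type, so it needs no
    # change when the representation of a cursor position evolves.
    return "cursor at %s" % pos

class Position:
    """The later, fully general representation: line plus column."""
    def __init__(self, line, column):
        self.line, self.column = line, column
    def __str__(self):
        return "%d:%d" % (self.line, self.column)

print(show_status(42))               # prototype stage: a bare int offset
print(show_status(Position(3, 7)))   # evolved stage: same call, no edits
```

In a statically typed language, `show_status` would have declared its parameter as `int`, and every such pass-through function would need editing when the representation changed.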
Actually, Boo just rounds the float to an int type.
Anyway, thanks for that summary of the benefits of dynamic typing. In boo refactoring pain is eased dramatically due to many of the variable types being inferenced, however, I can see some of the benefit. I think inferencing is a good trade off.
As for passing an object around without caring what type it is, most OO languages have a root object class. Any object can be abstractly moved about.
I don’t usually throw away prototype code, I haven’t often needed to – and in the times that I have, its a major architectural issue unrelated to typing.
And no, I don’t do the unnecessary class thing. Although I am guilty of making every field a property (accessor methods). I wish .net had unified fields and properties, as they are the same syntactically.
I’ll look more at dynamic stuff.
A root object class is just a form of dynamic typing, as are any polymorphic extensions to otherwise statically typed languages. A language like Java is, in practice, quite dynamic in its typing, since object references don’t have to point to an object of a specific class, but may point to an object of any subclass of that class.
And you’re right that type inferencing makes static typing a lot easier to deal with, though I still prefer soft-typed dynamic languages (with optional type declarations and a good type-inferencing compiler). Though it should be pointed out that a language with type inferencing and extensive polymorphism and a soft-typed dynamic language aren’t that far apart in the design space, though each starts from a different end of the spectrum.
Your comment about combining psychology and mathematics hit the nail on the head. Which is actually what I think the article was driving at. Although, since I’m no mathematician, I can’t really envision what kind of direct impact mathematics could have… I could envision for example searching through codebases and statistically identifying common programming paradigms, then coming up with new types of shorthand notation (i.e. in the same way “for” and “while” are shortcuts for writing conditional loop commands by hand, a statistical analysis of code could allow us to find more areas in which new shortcuts would help). As far as the psychology goes, that’s where I think computing is growing up right now in general, i.e. identifying ways to shorten learning curves for all aspects of computer usage. Right now it’s most visible outside of the “coding” spectrum, in the field of UI design, but of course IDE’s use UI’s too, so I think they’ll be seeing changes along with everything else.
Despite the naysayers, I agree with the author that programming truly can be improved to the point where more and more can be programmed by non-programmers. And despite the naysayers again, I’ll risk the flames by saying I think Microsoft is, and has always been, on the cutting edge of this. Basic, and now Visual Basic, has always been THE programming language for non-programmers, and it continues to evolve to incorporate new developments found in other programming languages. Unfortunately, especially since the first .NET version, the syntax and keywords have become more and more convoluted to accompany the added features, to the point where it degrades the simplicity that made Basic so wonderful in the first place. Microsoft really needs to get rid of all the long-ass keywords and types… Aside from that, the major problem with Visual Basic is that it’s always been considered too slow and unreliable for large projects, and that’s probably a justified opinion.
But I think it’s headed in the right direction, and as of now Microsoft probably has the best lead in bringing programming to the masses…
Of course, what would be even more important and desirable than a MS-dominated beginner’s product line is if we could get programming integrated into education. Imagine what a world we could have with that many more programmers able to contribute to open-source programs! Imagine how much faster software development, and even the development of software-development environments, would take place with that many more people capable of doing it! This is perhaps much more important than attempting to make a new programming paradigm or inventing a new, super-easy-to-learn “Esperanto” of programming languages.
“Copy & paste” programming has been obsolete since the invention of the external FORTRAN subroutine allowed one to create an external function library for everyone to use, and that was done in the early 1960’s.
(I’ve worked on such systems written in older FORTRAN, and they have some of the reusability advantages of OO languages without much of the syntactic complexity.)
Edited 2006-09-03 18:04
> dynamic typing absolutely sucks, imo. You have to declare the damn
> variable only once
Yes and no, depending on the context. On the one hand, it annoys
me that in Python (AFAIK) there is no way to declare variables that
are members of a class (except as comments).
On the other hand, in C/C++ you have to declare the type of any local
variable, even though its type could easily be inferred. This is
especially annoying in “for” loops using iterators.
Note that C/C++ can infer types of an expression, for example, if “a”
and “b” are integers, so is “a+b”. You can write “(a+b)*(a+b)”
instead of “int c=a+b; int d=c*c;”. So you can have explicitly typed
named variables, or implicitly typed unnamed expressions, but nothing
in between.
> but there are hundreds of chances to get a typo in its name in which
> case dynamic languages may think: Cool, a new var! These sorts of bugs
> can be hard to find.
What about making “a = 2” only work for creating new local bindings
and result in an error if “a” has been declared before. To modify “a”
you’d have to write “a := 2”, and this would result in an error if “a”
hasn’t been “declared” before with “a = sth” (similarly to “+=”, “-=”,
etc.)
I think this would encourage people to give different names to
different objects, and result in fewer bugs. Too many times have I
seen the same variable reused as an attempt at “optimization”.
Inferno does it like this, except it is the opposite:
a := x; declares a of the same type as x, with the same value.
a = x; assigns x to a, which must already be declared.
I think there are fewer declarations than assignments, so it makes sense to use the more verbose construct ‘:=’ for declaration and the more concise construct ‘=’ for assignment.
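The declare/assign split can be modelled in a few lines of Python. This is a toy sketch, not any real language’s implementation; `Scope` and its methods are invented purely to make the semantics concrete:

```python
class Scope:
    """Toy model of the Inferno-style ':=' declare vs '=' assign split."""
    def __init__(self):
        self.vars = {}

    def declare(self, name, value):
        # models 'name := value': the name must be fresh
        if name in self.vars:
            raise NameError(name + " already declared")
        self.vars[name] = value

    def assign(self, name, value):
        # models 'name = value': the name must already be declared
        if name not in self.vars:
            raise NameError(name + " not declared (typo?)")
        self.vars[name] = value

s = Scope()
s.declare("a", 2)
s.assign("a", 3)          # fine: 'a' was declared
try:
    s.assign("b", 4)      # the classic typo bug, caught immediately
except NameError as err:
    print(err)            # b not declared (typo?)
```

The misspelled-variable bug from earlier in the thread becomes an immediate error instead of a silent new binding, which is exactly the appeal of the proposal.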
I’m sure he’ll burn in hell for these sentences 😉
It didn’t make sense to me either until I saw the demo at
http://subtextual.org/demo1.html
Weren’t modules and OO the modern and WAY less dangerous way of copy&paste?
The way copy&paste is used in the demo is pretty much like using modules in OO.
But if you find one, good on you…
The future of programming is quite easy to foresee. No need to think in fancy new paradigms or languages.
Everything, from database to physical simulation, will be written in C++. Or something like that, because nobody will use the original language anymore. All will be templatised: pointers, iterators, functions, everything!
The programs will be full of fantastic implicit mechanisms hidden to the view of the programmer, only known to the original authors of each class and template. Only step-by-step tracing and extensive use of MS programming tools will bring underlying structures to the surface. Maybe some EMACS wizardry will do it, too.
Compile times will be similar to those in the 70s/80s, with lots of time for listening to the radio or eating cookies. At this point nobody will look at compiler errors anymore, as they will have become impossible to understand, so compiler programmers will just remove them and put numeric codes instead, just for retro feeling.
At this point only hackers will program for the desktop /console market, while the rest work on embedded systems or make web pages for a living.
Code improvisation will become a “method”. The meaning of “spaghetti code” will be forgotten, as all programs will be a mess by default.
Officially all code will be OO, but for some strange reason it will look like a strange mix of C, LISP, Smalltalk, and even Prolog and FORTRAN.
A utopia, don’t you think?
You mean, like Lambda Calculus?
Indeed, much of the article smacks of “I should be using Lisp”.
Usability should be the central concern in the design of programming languages and tools. We need to apply everything we know about human perception and cognition. It is a matter of “Cognitive Ergonomics”.
It’d be nice to see a formal application of this to a programming language design (and its APIs!). Lisp goes further in this regard than any other language I’ve used, in that its creators went to great lengths to try to figure out how the programmer would want to use the language, and designed the language to behave as expected. As a result, Lisp’s APIs behave “the right way” (i.e. in the way I expect them to behave) far more often than in any other language I’ve encountered.
Notation matters. We need to stop wasting our effort juggling unsuitable notations, and instead invent representations that align with the mental models we naturally use.
The semantics of the notation matters, the syntax the notation, not so much. One of the beauties of Lisp is its macro-programming facilities, which allow you to create domain-specific-languages (DSLs) on top of the basic language. This directly addresses the “translation between mental models” aspect the author refers to earlier. In a DSL, the translation is minimized, because the semantics of the notation are equivalent to the semantics of concepts in the problem domain.
To recite one example, the invention of Calculus provided a revolutionary language for the development of Physics. I believe that there is a “Calculus of programming” waiting to be discovered, which will analogously revolutionize the way we program. Notation does matter.
The revolution with calculus wasn’t about notation, it was about semantics. You can write the derivative operator as dx/dy, x’, or (diff x y), but all have the same semantics. It’s the semantics that give calculus its power — the notation is arbitrary. The notation a mathematician uses in Mathematica is completely different from the notation they use when writing by hand, but the two are completely equivalent in terms of semantics.
it is absurd that programs are still just digitized card decks, and programming is done with simulations of keypunch machines.
And mathematics is still done using symbols in 2D space, as we’ve been doing for thousands of years. The notation is not the bottleneck here…
The dominant programming languages all adopt the machine model of a program counter and a global memory.
Many languages are based directly on the lambda calculus instead.
Our languages are lobotomized into static and dynamic parts: compile-time and run-time.
In Lisp, the phases still exist, but the lines are blurred significantly.
There should be no difference between run-time and edit-time.
Yes! As in Lisp, where programs are basically written while they are running, in the REPL.
Now, I don’t want to sound like I’m saying that Lisp is some silver bullet, because it’s not. It certainly doesn’t address all of the ideas that the author mentions, especially the ones about higher-level interfaces to specific tasks. However, I think the author’s quest would be best served by starting from Lisp and going from there, or at least studying it at length and learning what it does and does not offer. A lot of the more basic aspects of the author’s ideas have already been implemented in Lisp. Even the “freeing code from the limitations of text” was touched upon in Apple’s Dylan (a Lisp derivative) IDE. Moreover, the language’s high level of dynamicity makes it highly suitable for interfacing with higher-level frameworks. Consider Kenny Tilton’s ‘Cells’ framework, which brings the spreadsheet model of computation (which is highly suitable for accounting and some engineering tasks) into Lisp.
Is it possible to define infix operators on top of Lisp?
I often hear people saying “But it’s just syntactic sugar”. Right, it absolutely is. But this sugar can make a hell of a difference.
Sometimes I rewrite an equation differently in order to understand it:
throw out as many brackets as possible, use shorter variable names and only symbols I like (say alpha instead of beta).
I really think one should not underestimate syntactic sugar. So my question: can Lisp do it?
You can do a lot more than defining infix operators on top of Lisp. The Lisp macro system is fully procedural, meaning the input to a macro body can look like pretty much whatever you want it to look like, within the bounds of certain keyword and punctuation constraints in the parser.
The weakness of Lisp in this regard is that it’s mainly the semantics that are truly programmable. The language helps you mutate the semantics of the language to fit your needs. It’ll let you (largely) mutate the syntax of the language, but it won’t help you do so. At the limit case, you could define a DSL that had a syntax nothing like Lisp, but then you’d basically have a parser in your macro to parse that language. Writing just a parser is better than writing a whole language implementation, which is what you’d have to do in most any other language, but it’s clear that Lisp’s level of support for syntax programming is not at the level of something like what Perl6 is supposed to have.
That said, I don’t see the point of fancy syntax, in that fancy syntax gets in the way of programmable semantics. In other words, I’d rather have a bare syntax like Lisp, plus the power of a fully-procedural macro system, than a rich syntax without the power of procedural macros. To date, no language has been able to encompass both together, and it’s not even clear that it’s possible.
Edited 2006-09-03 18:54
Thanks for your reply.
My point is that I’m mainly writing programs for scientific computation (containing lots of vectors, matrices, …) and for this domain every language without (overloadable) infix operators makes things much more difficult.
It’s been a long time since I last tried Lisp but if I remember correctly the equation
x=y*b^c + x
would read roughly like this in Lisp
(setf x (+ (* y (expt b c)) x))
Now this was a SIMPLE equation…
See my problem?
I really do believe theres a reason infix notation is used in math…
>It’s been a long time since I last tried Lisp but if I >remember correctly the equation
>x=y*b^c + x
>would read roughly like this in Lisp
>(setf x (+ (* y (expt b c)) x))
For this specific problems I think you can define a macro that permits you to write
(infix-expr x = y * b ^ (a-function c d) + x)
rather easily.
Isn’t that what Perl 6 is supposed to have? Lisp-like macros (and also C-like textual substitution macros), but still the more familiar C-like syntax, including prefix, postfix, and infix operators, including operator overloading.
Perl6 is supposed to have a fully programmable syntax. If it works, that’d be a significant advancement in language design indeed. Though it’ll be some years yet before Perl6 has a mature implementation, much less a body of practical experience that can judge whether it works or not.
That said, I don’t see the point of fancy syntax, in that fancy syntax gets in the way of programmable semantics. In other words, I’d rather have a bare-syntax like Lisp, plus the power of a fully-procedural macro system, than a rich-syntax without the power of procedural macros. To date, no language has been able to encompass both together, and its not even clear that its possible.
What about O’Caml and camlp4? You can have fancy syntax, and replace it, extend it, etc. Basically, it manipulates the ocaml AST.
Let me revise that. “To date, no language has been able to encompass both together, in a useful way, and its not even clear that its possible.”
Camlp4 has the right features (in particular it lets you call procedures in the expansion of a macro), but almost nobody actually uses it. It’s enormously complicated, and exposes the details of parsing the Ocaml AST to the programmer. You have to deal with operator precedence, the types of various AST nodes, etc. Heck, until recently, it required the use of a special “revised” Ocaml syntax in parts of the macro, presumably because it was too hard to deal with the regular syntax. Not to mention the fact that it’s not well-integrated with the environment, as it separates macro-expansion into a pre-compile phase.
In comparison, Lisp macros are very simple, even though they do in effect manipulate the AST (it’s just that the Lisp AST is almost trivial). Also, Lisp macro code looks just like regular Lisp code, and Lisp macros are deeply integrated into the compiler, so macro-expansion is available at any time (even at runtime). All this makes Lisp macros much more accessible, even for relatively novice programmers, and thus much more useful.
There is some work in getting the simplicity and generality of Lisp macros in an infix language. Jonathan Bachrach wrote a paper on “Dexprs”, which was a proposal to extend Lisp-like macro facilities to Dylan (which uses an infix syntax). The paper pointed out the futility of trying to do infix macros at the AST level, and proposed doing them at the SST (skeleton syntax tree) level. In other words, macros operate on the basic “shapes” of constructs in the language, rather than on AST nodes. Of course, it helps that the Dylan syntax is very simple and regular, so the language actually contains very few “shapes” in the source code. This idea has some promise, and there is an implementation of it within the Open Dylan compiler (where it is used extensively), but there is no use of it outside of that project (the compiler doesn’t expose the feature to user code, as it’s a non-standard extension), so there is little consensus about how it works in practice.
“The revolution with calculus wasn’t about notation, it was about semantics. You can write the derivative operator as dx/dy, x’, or (diff x y), but all have the same semantics. It’s the semantics that give calculus its power — the notation is arbitrary. The notation a mathematician uses in Mathematica is completely different from the notation they use when writing by hand, but the two are completely equivalent in terms of semantics.”
Actually, notation was at the heart of Calculus. Leibniz was concerned with making mathematical reasoning amenable to mechanical computation. He had previously devised a mechanical computer, and he spent a disproportionate amount of time developing a notation for Calculus that would allow his computer to be “programmed” easily. There is a reason that differentiation in Leibniz notation is so mechanical: it was designed to be. While the notation may be arbitrary, a better notation makes the underlying power and semantics far more accessible.
It’s interesting to see that the author has identified the very same problems that a certain famous E. W. Dijkstra identified some thirty years ago. However, that is about all the author has in common with Dijkstra.
Dijkstra wrote a lot about how programming and computers are intellectually the most complex things man has ever invented, and how to grapple with the problem of programming, among other things. Your local library might have his A Discipline of Programming, but in case it doesn’t, these should get you started:
http://www.cs.utexas.edu/users/EWD/
He advocated the use of mathematics and predicate calculus to derive programs from the specification, similar to how mathematical theories are derived from simpler lemmas and axioms. Not only does this make it easier to create the program, it also lets you show that your program is correct, essentially for free.
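As a loose illustration (my toy example, not Dijkstra’s calculus itself), here is the flavor of specification-driven programming in Python: the postcondition is the specification, and the loop invariant is what justifies the code:

```python
# A toy flavor of Dijkstra's discipline: state the specification as a
# postcondition, maintain a loop invariant, and check both with asserts.
# (The example and names are illustrative, not from A Discipline of Programming.)

def maximum(xs):
    # Precondition: xs is a non-empty list of comparable values.
    assert len(xs) > 0
    m = xs[0]
    for i in range(1, len(xs)):
        # Invariant: m is the maximum of the prefix xs[0..i-1].
        assert m == max(xs[:i])
        if xs[i] > m:
            m = xs[i]
    # Postcondition (the specification): m is the maximum of the whole list.
    assert m == max(xs)
    return m

print(maximum([3, 1, 4, 1, 5]))  # → 5
```

Dijkstra’s actual method derives the loop body *from* the invariant with pen and paper; the asserts here merely check the same reasoning at runtime.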
Now, the reason why his method is not popular is that people, for some reason, think mathematics is too difficult, or don’t like it at all. Some accused Dijkstra of making programming more difficult by mixing in mathematics… Suit yourselves, but I can see the benefit in his method.
While reading the article, I noticed that the author probably has little or no experience with functional or declarative programming. Another OSNews reader already commented about Lisp above, and I have little to add to that.
“We are constantly translating between different representations in our head, mentally compiling them into code, and reverse-engineering them back out of the code. This is wasted overhead.”
The reason why programmers think in terms of machines is because that’s the only way to do it. A computer is a machine. Programmers must engineer machines. That’s their job. You can abstract the language so you aren’t thinking in terms of machines, but that requires unnecessary extra time and space for the program. The goal of engineering is to work with what you have in the most efficient way possible.
I think the author is hoping for a paradigm shift where languages leave the physical world of number-crunching machines for the ideal world of pure, mathematical thought. Programming languages are just that: languages. Languages are the bridge between the world of thoughts and the physical world. Trying to escape the physical world is not merely infeasible; it’s impossible for humans to do.
I think the Fortress language will come pretty close to having a mechanism universal enough to deserve the name, because its syntax can be rendered directly in a form very similar to mathematical or pseudo-algorithm notation. Therefore, it may be well accepted by many. The language itself is also growable, allowing domain-specific languages to be plugged in easily later. And unlike many dynamic languages, it is built for performance.

Worth checking:
http://research.sun.com/projects/plrg/
http://research.sun.com/projects/plrg/fortress.pdf
The most famous one is the Lambda calculus, implemented in ML, Haskell and other languages.
There is also the pi calculus, which is still experimental.
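For a taste of the lambda calculus, here is a minimal sketch using Church numerals, written with Python lambdas (an illustrative encoding, not any particular ML or Haskell library):

```python
# The untyped lambda calculus needs nothing but single-argument functions.
# Church numerals encode the number n as "apply f to x, n times".

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting applications of +1 starting at 0.
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # → 5
```

Everything here, including the numbers themselves, is a function; that is the sense in which the lambda calculus is a candidate “calculus of programming”.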
The reason that programming is still hard work is not so much that we have the wrong programming languages (not that the current crop of languages is without shortcomings), but the monolithic programming model that is adopted.
The monolithic programming model is the model where each program is an executable which must run as a separate process, and nobody beyond itself knows how it behaves.
The solution is a programming model as it was discussed previously on this site:
http://www.osnews.com/comment.php?news_id=15643
In Star Trek, the computing environment is also the programming environment: functions can be modified while the program runs, new queries can be displayed on the screen with ease, and programming can be accomplished by touching and rewiring modules on the screen (there is a TNG episode where this is portrayed).
If it all sounds like LISP, that’s because it is. LISP of course has its own problems and shortcomings (no syntax, too many parentheses, fragmentation, etc.), but it is a pity that CS started with LISP and has had to take so many steps just to get back to it.