Linked by Eugenia Loli on Wed 12th Dec 2007 05:47 UTC, submitted by LinucksGirl
General Development Lazy programming is a general concept of delaying the processing of a function or request until the results are needed. Thinking in terms of lazy programming can help you rid your code of unneeded computation and restructure programs to be more problem-oriented.
They don't teach this in school anymore?
by eggs on Wed 12th Dec 2007 06:08 UTC
eggs
Member since:
2006-01-23

It's very useful. I use it a lot for properties on objects (yes, I'm a .NET programmer, don't kill me) where data needs to be fetched from the database, since the data isn't needed every time the class is instantiated. The secret is not to do it for everything (you don't want to hammer the database, and the overhead of fetching the data may cost more than the data itself). I usually do it for big things like binaries in the database. Specifically, I used to maintain a web application that stored Word documents in the database. A single Word file wasn't all that big, but if I were to get a collection of, let's say, all the articles in the database, it would take quite a while to get all the data, so I used lazy loading on the file blob:

private byte[] fileBlob;
public byte[] FileBlob
{
    get
    {
        if (fileBlob == null) fileBlob = GetFileBlob();
        return fileBlob;
    }
}

Also, I have my default view set to threaded, and I can't make a post if there aren't other comments already. This is very annoying, as I have to set my default to flat every time I post to an article with no comments yet.

Edited 2007-12-12 06:10

Reply Score: 2

CaptainPinko Member since:
2005-07-21

I don't know about .Net but I believe Hibernate (/EJB3?) does this for you in Java.

Reply Score: 1

g2devi Member since:
2005-07-09

It's taught (at least it was 10 years ago) in a more advanced form under the topic of memoization.

But it's not really emphasized, and you generally won't lose marks for inefficient designs as long as you can prove your program correct and (sometimes) show that its O(f(x)) running time is of polynomial order.
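For anyone who hasn't met the term: a minimal memoization sketch in Python (hypothetical code, not from the article) that caches a function's results keyed by its arguments:

```python
# Minimal memoization sketch: cache results per argument tuple so
# repeated calls with the same arguments skip recomputation.
def memoize(f):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, in linear rather than exponential time
```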

Unfortunately, these days computers are faster and have more memory, garbage collection (not keeping track of resources) is cheap, and caching (whether it helps or hinders you) is automatic, so lazy programming and efficiency seem to be an afterthought. Efficiency is only important if you can't just throw more resources at the problem, can't convince your users to put up with the slow and resource-hungry nature of your app, and can't find some sort of third-party cache that covers up the issue properly.

Thankfully, there is still good coding out there, especially in the Linux world, where my creaky, fully loaded eight-year-old server/desktop still outperforms Windows XP on a modern desktop. But that kind of good coding is something you learn from masters outside of school (even 10 years ago), and most people unfortunately don't have the benefit of working with one, so these sorts of articles are definitely welcome.

Reply Score: 2

jayson.knight Member since:
2005-07-06

"public byte[] FileBlob
{
get
{
"

You're breaking a cardinal rule of .Net development by returning an array from a property: http://www.gotdotnet.com/Team/FxCop/Docs/Rules/Performance/Properti...

Of course, the rule exists mainly because callers can mutate the returned array, so a defensive copy is usually made on each property access, whether it holds bytes or reference types. I'm sure you can see the performance ramifications of copying a blob every time it is read.

Reply Score: 2

No mention of Haskell
by cdutton on Wed 12th Dec 2007 06:51 UTC
cdutton
Member since:
2005-07-24

Perhaps it's because it's just too easy in Haskell, but I find it odd that the author makes no mention of a language that does lazy evaluation with no extra hoop jumping required.

Reply Score: 1

RE: No mention of Haskell
by grettke on Wed 12th Dec 2007 15:05 UTC in reply to "No mention of Haskell"
grettke Member since:
2007-12-12

The author talks about Haskell at the end of the article.

Reply Score: 1

RE[2]: No mention of Haskell
by null_pointer_us on Wed 12th Dec 2007 23:51 UTC in reply to "RE: No mention of Haskell"
null_pointer_us Member since:
2005-08-19

Maybe he used lazy evaluation when reading the article.

:D

Reply Score: 1

too lazy
by drkwolf on Wed 12th Dec 2007 07:22 UTC
drkwolf
Member since:
2007-05-18

wake_me_up_when_its_done();

Reply Score: 2

Dima Member since:
2006-04-06

Lazy Editor? ;)

Reply Score: 2

Three techniques of a programmer
by eosp on Wed 12th Dec 2007 14:19 UTC
eosp
Member since:
2005-07-07

Everything comes down to hashing, caching, or adding a level of indirection?

Reply Score: 2

perfect for me, lazy programming for...
by Tuishimi on Wed 12th Dec 2007 15:21 UTC
Tuishimi
Member since:
2005-07-06

...a lazy developer. ;) (Yeah yeah, I am kidding.)

Reply Score: 2

japh Member since:
2005-11-11

Well, according to the man who created a horrible^H^H^H^H^H^H^H^H popular programming language, laziness is a good thing.

http://c2.com/cgi/wiki?LazinessImpatienceHubris

Reply Score: 1

Lazyness in Haskell
by tuttle on Wed 12th Dec 2007 16:05 UTC
tuttle
Member since:
2006-03-01

Laziness is much safer in purely functional languages such as Haskell because of referential transparency: the value of a function depends only on its parameters, and on nothing else.

In imperative languages, features like closures and laziness lead to hard-to-find errors. Functions with side effects are generally not safe to make lazy.

So just trying to copy features from functional languages like Scheme into imperative languages like Java will not work.
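A hypothetical Python sketch of why: wrapping a side-effecting call in a thunk (laziness by hand) silently moves its effect to whenever, and only if, the thunk is forced:

```python
# Deferring a side-effecting function changes *when* its effect runs.
log = []

def fetch(name):
    log.append("fetched " + name)   # side effect: observable ordering
    return name.upper()

eager = fetch("config")             # effect happens right here
lazy = lambda: fetch("secrets")     # effect deferred until forced

print(log)       # ['fetched config']; 'secrets' not fetched yet
value = lazy()   # forcing the thunk runs the side effect now
print(log)       # ['fetched config', 'fetched secrets']
```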

You can put lipstick on a pig, but it's still a pig.

Reply Score: 3

RE: Lazyness in Haskell
by RandomGuy on Wed 12th Dec 2007 20:40 UTC in reply to "Lazyness in Haskell"
RandomGuy Member since:
2006-07-30

I don't mean to offend you, I've only added emphasis to differentiate the main sentences from the ones putting them into context. Sorry for the long post.

Those functional programming constructs don't mix too well with an imperative programming style, that's true.

On the other hand, the mere fact that a programming language doesn't _enforce_ referential transparency doesn't mean you _have_ to break it.
It's just an option. Forbidding state change altogether is throwing out the baby with the bathwater.

Everything else being equal, a programming language that allows you to break referential transparency as you see fit is inevitably more expressive than one which doesn't
(simply because the second is a subset of the first).

Of course, there are other things to take into consideration. If, for example, a library that you intend to use is not referentially transparent, your whole program is not. Therefore, if possible, libraries should be r.t.

It should also be obvious whether a function is r.t. or not. The crude way to do this is to mangle the function name, for example by appending a bang (IIRC Scheme does this?). One could also have an IDE check this and show destructive functions in a different font and/or color. Since R.T. is a recursive property, the IDE would 'only' need a set of basic functions that are known to be r.t.

Now, I can almost hear you asking:
"But why should I go to such lengths? I don't need to violate R.T. in the first place!"

First of all,
I think it's stupid to hardwire a restriction like "Thou shalt not violate the holy R.T.!" into a language's core.
As a coding standard/policy? Sure, why not.

In addition,
a lot of people think in terms of objects and state; that's just how our brains work. I think OO programming owes much, if not all, of its success to this fact. Few people would think of painting as a function taking a surface and a color as arguments and returning a new surface. They'd rather put it as surface = surface + paint.

Maybe I'm just not much of a functional-language guy (programming is only one of my hobbies, anyway), but this week I was rewriting some parts of a Python program in a functional style for reusability's sake and found myself thinking "This feels wrong, _wrong_, WRONG!"

When the part in question was a sequence of statements, I could easily see what happened to the input as it passed through the program.
My first functional attempt was one hell of a (wrapped) run-on line like "return f1(f2(foo, f3(f4(bar), f5(baz))), f6(z))", with f_i being some predefined functions. It was a lot shorter than the imperative part it replaced, but it made a hell of a lot less sense, too.

So I tried to split it up into some bigger functions like
g1(x,y,z)=f1(f2(x,y),f6(z))
g2(x,y)=f3(f4(x),f5(y))
with the final line looking like
"return g1(foo,g2(bar,baz),z)"
However, g1 and g2 were not functions that made any intuitive sense _as_parts_. Either that, or the functions required every variable and its grandma to be passed as arguments.
Of course, the whole program worked, but the separation of the functions felt unnatural, random and forced.

All in all this version of the program felt like a math proof that runs longer than 2 pages (i.e. just about every proof):
I could see why the parts worked. They just didn't make any sense and I sort of missed the big picture.
It was like:
"We want to prove A ==> E"
Ok.
"A ==> B"
Yes. I don't know what B is and what it does, though.
"(A and B) ==> C"
Aha, whatever 'B' and 'C' may be. But I cannot see a mistake in your reasoning.
"(B and C) ==> D"
Ok, so from (I_don't_know_what_the_fsck_this_is_supposed_to_be)1/2 respectively, we can conclude D, whatever that may be. Continue...
"(B and C and D) ==> E"
Hey, right, we set out to prove E! I almost forgot that. Now tell me, what were B, C and D again? I know they follow from A but tell me, what are they supposed to mean?

So I guess my main point is:
(Most)
Humans think in terms of objects, not in terms of black boxes performing a sequence of mathematically well-defined transformations.

There are problems that are best expressed in an imperative programming style.

Many of us need objects to attach meaning to. An idea that's detached from everything else is hard to remember.


This is especially true if the necessary operations are rather simple but depend on a lot of variables and if their meaning can only be understood when you see all of them, what they're needed for and where they come from.

If your mind works well with functional languages, good for you.
Not everybody's brain does, though.

Reply Score: 3

RE[2]: Lazyness in Haskell
by tuttle on Thu 13th Dec 2007 00:49 UTC in reply to "RE: Lazyness in Haskell"
tuttle Member since:
2006-03-01

Everything else being equal, a programming language that allows you to break referential transparency as you see fit is inevitably more expressive than one which doesn't.
(simply because the second is a subset of the first)


Depends on how you define expressive. Functional programs are in general much shorter than imperative programs, mostly because of advanced language features such as closures and pattern matching. But those features are made much easier to implement and to understand by referential transparency.


First of all,
I think it's stupid to hardwire a restriction like "Thou shalt not violate the holy R.T.!" into a language's core.
As a coding standard/policy? Sure, why not.


If you build R.T. into the language, there are a lot of optimizations and simplifications the compiler can make. You also need fewer language constructs.

For example:

-The whole distinction between passing arguments by reference or by value does not exist in an R.T. language.

-Memoization and laziness are both extremely easy to implement in an R.T. language. In fact, they can be built into the language (see Haskell). In a non-R.T. language, the programmer has to do this by hand.

-A closure in an R.T. language can be created by copying the scope where it is defined, while a closure in a non-R.T. language has to keep a reference to a mutable object describing the state (that is the reason Java anonymous classes can only access final members of the outer scope).

-And best of all: functional programs can be trivially parallelized.
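The closure point shows up even in Python, where closures capture variables rather than values (a hedged sketch, not tied to any particular language spec):

```python
# Closures over mutable state capture the variable itself: the
# classic late-binding surprise.
def make_adders():
    adders = []
    for n in range(3):
        adders.append(lambda x: x + n)   # all three share one 'n'
    return adders

print([f(10) for f in make_adders()])    # [12, 12, 12]

# Copying the value into a per-closure binding restores the
# referentially transparent behaviour:
def make_adders_rt():
    return [lambda x, n=n: x + n for n in range(3)]

print([f(10) for f in make_adders_rt()]) # [10, 11, 12]
```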


So I guess my main point is:
(Most)
Humans think in terms of objects, not in terms of blackboxes performing a sequence of mathematically well defined transformations.


You can still think in terms of objects in an fpl. The only difference is that you don't deal with "the object", but with the state of the object at a certain point in time. In fact, I think having to distinguish between "the object" and "the current state of the object" makes many things easier.

Unfortunately, the functional-language community is mostly composed of computer scientists who live in a kind of ivory tower. And languages like Haskell, while extremely powerful, are not exactly beginner-friendly.

Nevertheless, if you want to get a taste of functional programming, you should try Haskell instead of Python.



A typical example for something where you might think that the mutable object-oriented approach would be superior to the functional approach would be some kind of graphical editor (a text editor or a bitmap editor).

But as soon as you need features like undo or multithreading, the mutable approach gets complex very fast because mutable state is spread all over the place. In a functional program, the undo functionality is almost for free, and the multithreading is trivial.

By the way: could you post the code you tried to rewrite in functional style? I am sure it is possible to rewrite it in functional style without losing clarity.

Reply Score: 2

RE[3]: Lazyness in Haskell
by RandomGuy on Thu 13th Dec 2007 17:23 UTC in reply to "RE[2]: Lazyness in Haskell"
RandomGuy Member since:
2006-07-30

Depends on how you define expressive. Functional programs are in general much shorter than imperative programs, mostly because of advanced language features such as closures and pattern matching. But those features are made much easier to implement and to understand by referential transparency.

That's why I said everything else being equal.
I'd define expressiveness with respect to a certain task as some function that is strictly monotonically decreasing in the number 'n' of characters needed to solve the task but always >=0, say e(t)=1/n(t).
The overall expressiveness of the language could then be defined as E=p(t_0)*e(t_0)+...+p(t_n)*e(t_n),
with t_0, ..., t_n being the different possible tasks and p(t_i) the probability of each task occurring.

For the tasks where the functional solution is superior, the non-pure language could simply use it (remember, everything else being equal, so it's got the same constructs as the functional one). That means there just needs to be one corner case where the non-pure solution is shorter, and the non-pure language automatically is (however slightly) more expressive.
But I'll admit it also makes it about a billion times easier to shoot yourself in the foot if you don't know what you're doing. :-)

Of course it makes writing a compiler a lot more difficult, but what do I, as a simple programmer, care about the pain the compiler writer had to go through?
Regarding parallelization, I cannot see why it shouldn't be possible to implement this for a non-pure language, as R.T. seems easy to judge given a list of functions that are known to be R.T. and some simple rules like rt?("if a then b else c") = rt?(a) and rt?(b) and rt?(c).
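That recursive check can be sketched in a few lines of Python (purely hypothetical: expressions as nested tuples, plus a whitelist of known-pure operations):

```python
# An expression is treated as r.t. iff its head operation is
# known-pure and every sub-expression is r.t. too.
KNOWN_PURE = {"+", "*", "if", "len"}

def is_rt(expr):
    if not isinstance(expr, tuple):   # literals and variables are pure
        return True
    op, *args = expr
    return op in KNOWN_PURE and all(is_rt(a) for a in args)

print(is_rt(("if", ("+", 1, 2), 3, ("*", 4, 5))))  # True
print(is_rt(("print", "hello")))                   # False: side effect
```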

You can still think in terms of objects in a fpl. The only difference is that you don't deal with "the object", but with the state of the object at a certain point in time. In fact, I think having to distinguish between "the object" and "the current state of the object" makes many things easier.

Unfortunately the functional language community is mostly composed of computer scientists that live in a kind of ivory tower. And languages like haskell, while extremely powerful, are not exactly beginner-friendly.


But dealing with the state of an object instead of the object itself is confusing. It's like calling myself 'Ben_sleeping', 'Ben_working', 'Ben_learning' or 'Ben_eating' depending on my current activity.
If people want to know whether or not I'm sleeping, they can always phone or poke me, depending on distance ;-)
But I guess that's just a matter of personal taste.
I don't think a little change in state justifies a new name; besides, I'm just no good at choosing or remembering names...

By the way: could you post the code you tried to rewrite in functional style? I am sure it is possible to rewrite it in functional style without losing clarity.

Sure I can. It was a program I wrote for a university course called something like "digital image processing/manipulation".
It's currently pretty messy, since I tried to make it mostly functional and gave up when I thought it was just getting worse and worse...
Be prepared to gouge your eyes out in despair ;-)
Oh, one more thing:
As you can see, I had to replace each tab with four '-'. Anyway, enjoy!

Hey, OSNews crew, why no code tags and just a puny 8000 characters? :-P

#!/usr/bin/env python
# -*- coding: UTF-8 -*-

def mprint(m, title):
----"""prints matrix 'm' neatly formatted with 'title' as, well, the title"""
----s=""
----for line in m:
--------for el in line:
------------s+="\t%i"%el
--------s+="\n"
----print "\t\t------%s------"%title
----print s

def scale(z):
----return lambda x: z*x

def vScale(z, v):
----"""scales vector 'v' with number 'z'"""
----return map(scale(z), v)

def mScale(z, m):
----"""scales matrix 'm' with number 'z'"""
----return [vScale(z, v) for v in m]

def inRect(point, corner):
----"""is 'point' in the rectangle defined by the corners [0, 0] and 'corner'?"""
----if point[0]<0 or point[1]<0: return False
----if point[0]>=corner[0] or point[1]>=corner[1]: return False
----return True

def size(matrix):
----"""computes [n_x, n_y] with
----n_x=#lines of 'matrix'
----n_y=#columns of 'matrix'"""
----res=[0, 0]
----for line in matrix: res[0]+=1
----for el in matrix[0]: res[1]+=1
----return res

def helper1(small, big, offset, size_b):
----"""small: small matrix, which is added to matrix 'big'
----offset: position of 'small' relative to 'big' (delta_x, delta_y)
----size_b: size of 'big' (lines, columns)"""
----for x in range(size(small)[0]):
--------for y in range(size(small)[1]):
------------if inRect([x+offset[0], y+offset[1]], size_b):
----------------big[x+offset[0]][y+offset[1]]+=small[x][y]

def helper2(small, big, offset, size_b):
----"""matrix 'small' is shifted by 'offset', then every element in 'small' is multiplied with the element in 'big' that's at the same position.
----The sum of these products is returned"""
----res=0
----for x in range(size(small)[0]):
--------for y in range(size(small)[1]):
------------if inRect([x+offset[0], y+offset[1]], size_b):
----------------res+=small[x][y]*big[x+offset[0]][y+offset[1]]
----return res

def convolution1(image, kernel):
----"""standard definition of convolution
----example: [0, 1, 1, 0] (x) [0, 1, 1] = [0, 1, 2, 1]"""
----size_i=size(image)
----size_k=size(kernel)
----if size_k[0]%2==0 or size_k[1]%2==0: print "ERROR! The kernel must have an odd number of lines/columns!"
----offset=[-(x-1)/2 for x in size_k]
----res=[[0 for el in line] for line in image]
----for x in range(size_i[0]):
--------for y in range(size_i[1]):
------------helper1(mScale(image[x][y], kernel), res, [x+offset[0], y+offset[1]], size_i)
----return res

def convolution2(image, kernel):
----"""Less common definition of convolution,
----every point in the result is computed by averaging over the points in 'image' with the kernel as weight function
----example: [0, 1, 1, 0] (x) [0, 1, 1] = [1, 2, 1, 0]"""
----size_i=size(image)
----size_k=size(kernel)
----if size_k[0]%2==0 or size_k[1]%2==0: print "ERROR! The kernel must have an odd number of lines/columns!"
----offset=[-(x-1)/2 for x in size_k]
----res=[[0 for el in line] for line in image]
----for x in range(size_i[0]):
--------for y in range(size_i[1]):
------------res[x][y]=helper2(kernel, image, [x+offset[0], y+offset[1]], size_i)
----return res

"""------------------Remark------------------------
To make sure that we don't change the overall brightness of the image,
we would have to divide every element in the resulting image by the sum over the elements of the kernel.
(integer division)

This step has been omitted so that even the values that would otherwise be rounded down to zero can be seen.
Furthermore, this assures that no information is lost and we can do further computations on the image before finally rounding the values for display.

The image may have an arbitrary number of lines and columns but the kernel must have an odd number of lines and columns.
For a completely symmetric kernel, convolution1/2 lead to the same results.
If the kernel is not symmetric, they may give different results.
This is demonstrated in the following tests."""

print "-------------------------Test1-------------------------"

image = [[0, 0, 0, 0, 0, 0],
----[0, 3, 4, 3, 2, 0],
----[0, 5, 3, 1, 2, 0],
----[0, 4, 3, 2, 5, 0],
----[0, 1, 1, 2, 3, 0],
----[0, 0, 0, 0, 0, 0]]

kernel = [[1, 2, 1],
----[2, 4, 2],
----[1, 2, 1]]

mprint(image, title="Image")
mprint(kernel, title="Kernel")
mprint(convolution1(image, kernel), title="Convolution1")
mprint(convolution2(image, kernel), title="Convolution2")

print "------------------------Test2---------------------------"

kernel = [[1, 2, 3],
----[0, 1, 2],
----[0, 0, 1]]

mprint(image, title="Image")
mprint(kernel, title="Kernel")
mprint(convolution1(image, kernel), title="Convolution1")
mprint(convolution2(image, kernel), title="Convolution2")

Reply Score: 2

RE[4]: Lazyness in Haskell
by marcusoverhagen on Thu 13th Dec 2007 22:32 UTC in reply to "RE[3]: Lazyness in Haskell"
marcusoverhagen Member since:
2005-08-20

Hey, OSNews crew, why no code tags and just a puny 8000 characters?

Yes, that's a problem, especially because you are unable to use indentation, as it gets stripped (but not always; see my comment below). Very strange.

Edited 2007-12-13 22:35

Reply Score: 1

RE[4]: Lazyness in Haskell
by tuttle on Fri 14th Dec 2007 13:12 UTC in reply to "RE[3]: Lazyness in Haskell"
tuttle Member since:
2006-03-01

For the tasks where functional solution is superior, the non-pure language could simply use it (remember, everything else being equal, so it's got the same constructs as the functional one). That means there just needs to be one corner case where the non-pure solution is shorter and the non-pure language automatically is (however slightly) more expressive.


If we assume that there is no penalty for additional language complexity, this is of course true. But that does not say much about whether mutability is desirable. After all, having goto from inside one method to inside another would also increase expressiveness.

But I'll admit it also makes it about a billion times easier to shoot yourself in the foot if you don't know what you're doing. :-)


Glad we agree on that. The problem is that most of the time you do not notice bugs caused by lack of referential transparency.

There are, for example, many cases where people break the encapsulation of objects by passing out a mutable reference via a property getter. This will not immediately lead to a bug, but it may cause very hard-to-find problems in the future.

Of course it makes writing a compiler a lot more difficult but what do I as a simple programmer care about the pain the compiler writer had to go through?


If your language specification is too complex, you will have language features that are not supported by some compilers, or compilers that behave slightly differently in some cases. And you have the problem that every programmer uses a different subset of the language, so that libraries from different people look completely different.

That is exactly what happened in C++. In the beginning, almost no compiler supported the full C++ standard (including exceptions, rtti). That is why libraries like Qt needed to add their own layer on top (moc).

And if you use 3 C++ libraries from three different vendors, they will usually look so different that they do not even seem to be written in the same language. One will use template metaprogramming all over the place (Boost), another one will extend the object system in some way (Qt) etc.

IMHO, the K.I.S.S. principle also applies to languages.

I don't think a little change in state justifies a new name, besides I'm just no good at choosing or remembering names...


You are correct that a change of state does not justify a name change. However, that is no contradiction of referential transparency. An r.t. language could allow something like
x=x+1
or
ben=ben.Wakeup().EatBreakfast()
As long as the original copy still exists, r.t. is not violated. "ben" is in this case just a name that has a different value in different scopes. Names having different meanings in different scopes is nothing unusual for an fpl.

Property setters are of course not allowed, but these can be substituted by functional setter methods:

ben.Sleeping=false //not r.t.
ben=ben.WithSleeping(false) //r.t.

Even having mutable local variables for loops would not violate r.t., as long as they can not escape the scope as a return value or in a closure.
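The WithSleeping idea maps directly onto, say, Python's frozen dataclasses (a hypothetical sketch; the thread's examples are pseudocode):

```python
# A "functional setter": return a new value with one field changed
# instead of mutating in place.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Person:
    name: str
    sleeping: bool

ben = Person("Ben", sleeping=True)
awake_ben = replace(ben, sleeping=False)  # r.t.: 'ben' is untouched

print(ben.sleeping, awake_ben.sleeping)   # True False
```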

Be prepared to gouge your eyes out in despair ;-)


There does not seem to be anything wrong with that code (except for the fact that the helper1 and helper2 functions could have more descriptive names). I could understand what it does almost immediately.

The convolution2 version is the more functional approach: it constructs the new image one pixel at a time by reading from the old image and the convolution matrix. It would be much easier to parallelize, since you do not need to synchronize any writes (each pixel in the target image is written exactly once, and the order does not matter).

But using python for image processing is a strange choice. It is not exactly fast for this kind of problem...

By the way:

If you are interested in functional languages, but still want to use the occasional imperative construct, you should check out scala.

It runs on the JVM and can use all java libraries. You can write r.t. code, but you can also use mutable state if you have to. It has a very powerful type system.

It is statically typed, but it has type-inference. So you usually do not have to write many type annotations.

http://www.scala-lang.org/

Reply Score: 2

RE[5]: Lazyness in Haskell
by RandomGuy on Fri 14th Dec 2007 15:41 UTC in reply to "RE[4]: Lazyness in Haskell"
RandomGuy Member since:
2006-07-30

After all, having goto from inside one method to inside another would also increase expressiveness.

It sure would, and I'd argue that it should be possible to do this in the core language. You can always add more restrictions via libraries/extensions/code checkers later on. On the other hand, I'm fully aware that the reverse is true if we focus on social phenomena:
If you make a feature available, people _will_ use it, even if you warn them that it's deprecated, dangerous and might bring about the end of the universe as we know it ;-)

IMHO, the K.I.S.S. principle also applies to languages.

Yes, as long as you don't make the programmer ride a unicycle because a vehicle with four wheels would clearly be too complicated, figuratively speaking. I do believe in the waterbed theory of complexity, to a certain extent. Not to the extent Larry Wall does, though.

While I agree that the complexities you mentioned would slow the uptake of the language initially, because compilers would be lagging behind, I don't believe they would necessarily make the language significantly more complex to _use_ for the ordinary programmer. C++ is not a typical example of an imperative language; it's more an example of a kitchen-sink language that has its fair share of needless syntactic complexities :-)

A r.t. language could allow something like
x=x+1

So did I get this right, "x=3; y=x;{x=x+1; print x}; print y;"
would print "4 \n 3"?

But using python for image processing is a strange choice. It is not exactly fast for this kind of problem...

Hehe, it was just part of a homework assignment.
The prof said we could use anything reasonably close to pseudocode, so Perl was right out ;-)
Usually I'd have used C, but recently I talked with a friend about Python and he thought I might have judged it too quickly, so I gave it a second try; needless to say, he was right.
In fact, if you wanted to process images reasonably fast and knew the kernel was separable, you'd just do two 1-D convolutions, e.g. with (1,2,1) and (1,2,3) transposed as the kernels, thus reducing the complexity from O(N^2) to O(N), where N is the number of lines/columns of the kernel.
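That two-pass trick can be sketched like this (hypothetical Python helpers; 'same'-size, zero-padded, correlation-style passes, matching the posted code's conventions):

```python
def conv1d(signal, kernel):
    """'Same'-size 1-D pass with zero padding (correlation form)."""
    k = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[j] * signal[i + j - k]
                for j in range(len(kernel))
                if 0 <= i + j - k < n)
            for i in range(n)]

def conv2d_separable(image, col, row):
    """Apply 'row' along each row, then 'col' down each column:
    two O(N) passes per pixel instead of one O(N^2) pass."""
    rows_done = [conv1d(r, row) for r in image]
    transposed = [list(c) for c in zip(*rows_done)]
    cols_done = [conv1d(c, col) for c in transposed]
    return [list(r) for r in zip(*cols_done)]

# matches the convolution2 example from the posted code:
print(conv1d([0, 1, 1, 0], [0, 1, 1]))  # [1, 2, 1, 0]
```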

If you are interested in functional languages, but still want to use the occasional imperative construct, you should check out scala.

Thanks for the suggestion! I'm always interested in learning new languages. One problem (besides the stupid 24h time limit every day) is books. I've read parts of some Lisp-related books on the net, and they're either overly cryptic or too colloquial and pointless, like one of Grandpa Simpson's ramblings. I think you're right in what you said about the ivory-tower guys. Python documentation, on the other hand, is excellent, and so is Perl documentation, even though I probably never really got the whole point of that language.

Then there are some languages that I hate almost instantly (Basic, Pascal) and some I love almost instantly (C). But overall, good books are most important to me.

Anyway, thanks for your patient replies :-)

Edited 2007-12-14 15:50 UTC

Reply Score: 2

RE[6]: Lazyness in Haskell
by tuttle on Fri 14th Dec 2007 19:11 UTC in reply to "RE[5]: Lazyness in Haskell"
tuttle Member since:
2006-03-01

If you make a feature available, people _will_ use it, even if you warn them that it's deprecated, dangerous and might bring about the end of the universe as we know it ;-)


And that is exactly why mutable state is such a terrible idea in a multicore environment :-) !

So did I get this right, "x=3; y=x;{x=x+1; print x}; print y;"
would print "4 \n 3"?


Yes. Most functional languages allow rebinding a name in a nested scope, but do not allow reassigning a name in the same scope.

But there is no fundamental reason why you should not allow reassigning names within one scope.

(In scala, you use "val" for an immutable value and "var" for a mutable variable. As long as the variable does not leave the scope, referential transparency is conserved.)

Thanks for the suggestion! I'm always interested in learning new languages. One problem (besides the stupid 24h time limit every day) is books.


There are a few good books on haskell, such as this one: http://www.amazon.com/Haskell-School-Expression-Functional-Programm...

There are no books about Scala yet (I think one is coming out soon). But there is a mailing list, and some people are using it for real-world commercial applications. There is a web framework called Lift: http://liftweb.net/index.php/Main_Page . It is somewhat similar to Ruby on Rails, but much faster and more scalable.

(This is by the way one major issue with most functional languages: the creators rarely use them for generic real-world applications. They usually write stuff like parsers and theorem provers.)

Haskell is very elegant and interesting from a theoretical point of view. But if you want to write real world applications, I would nevertheless suggest taking a look at scala.

Anyway, thanks for your patient replies :-)


You're welcome. I always enjoy a good language discussion.

Reply Score: 2

marcusoverhagen
Member since:
2005-08-20

What the author of the article demonstrates in his Listing 4 has been known as caching for ages, but he calls it lazy evaluation.


 public double getStandardDeviation()
 {
  if(stddev == null)
  {
   stddev = Math.sqrt(getVariance());
  }
  return stddev;
 }

You could just as well rename his stddev to cachedStandardDeviation.

Edited 2007-12-13 10:53

Reply Score: 1

segedunum Member since:
2005-07-06

What the author of the article demonstrates in his Listing 4 has been known as caching for ages, but he calls it lazy evaluation.

I'm glad someone managed to point this out. He also takes the absolute simplest form of this. Things get a whole lot more complex when you have different sources of data to work with, such as databases, and you try to work out what you can cache and what you can't. Is the data static, so it can be cached, or is it likely to change often, to the point where it needs to be recalculated or retrieved?
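To make that concrete, here is a hedged sketch (a hypothetical Stats class, not the article's Listing 4) of caching plus the invalidation step a one-shot lazy getter skips:

```python
import math

# Cache a derived value lazily, and invalidate it whenever the
# underlying data changes, so stale results are never returned.
class Stats:
    def __init__(self, values):
        self._values = list(values)
        self._stddev = None          # cached, computed on demand

    def add(self, x):
        self._values.append(x)
        self._stddev = None          # data changed: drop the cache

    def stddev(self):
        if self._stddev is None:     # recompute only when stale
            n = len(self._values)
            mean = sum(self._values) / n
            var = sum((v - mean) ** 2 for v in self._values) / n
            self._stddev = math.sqrt(var)
        return self._stddev
```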

Reply Score: 2