The example with "What's halfway between 1 and 9?" has confirmation bias written all over it. Incidentally, both 3 and 5 are reasonably close to 4.5, although who would round it down to 3 is still debatable. Extrapolating that humans prefer, or are somehow predisposed, to think logarithmically is a stretch; one could come up with literally an infinity of cases where this doesn't hold. I think practical evidence shows this, too -- have you actually seen how much engineering freshmen fight with logarithmic plots? Who the hell thinks 10 is midway between 1 and 100? Oh wait -- it's the base-*two* logarithm? Based on the fact that, you know, f(x) = log2(x) is close enough to f(x) = x for small x that you can sort of pretend "some traditional societies" -- a formulation that would raise eyebrows even on Wikipedia -- don't mind a little rounding there? Really?
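For what it's worth, the "3" answer the article reports can be sanity-checked directly: halfway between 1 and 9 in log space is exactly 3 in *any* base, because the log-space midpoint is just the geometric mean. A minimal sketch:

```python
import math

# Arithmetic midpoint of 1 and 9 -- the "schooled" answer
arith = (1 + 9) / 2                                  # 5.0

# Midpoint in log space: average the logs, then exponentiate back
log_mid = 2 ** ((math.log2(1) + math.log2(9)) / 2)   # ~3.0

# The base cancels out: the log-space midpoint of a and b
# is always the geometric mean sqrt(a * b)
geo = math.sqrt(1 * 9)                               # 3.0

print(arith, log_mid, geo)
```

So the choice of base 2 in the article is a red herring; the "logarithmic" answer is 3 regardless of base.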

The description of the paper itself seems legit (although, since, in the well-respected tradition of the free flow of ideas, it's only available for a considerable fee, I have no intention of actually checking it out myself), but the way it's covered in the article totally sucks. The paper appears to imply that this kind of rounding doesn't apply to just any kind of numbers, and that it also doesn't apply to just any kind of *information*. Some of our peripheral processing is done logarithmically -- think of sound sensitivity, for instance -- so it would make sense if this is how the whole chain were wired up. How this is connected to the article's introduction, other than by the word "logarithm", is beyond me.

4 numbers on the left of 5, 4 numbers on the right of 5:

1 2 *3*

4 5 6

7 8 9

That makes no sense either, because there's no way 1 2 6 9 looks anything like the other half of 4 5 7 8.

It may just be me, but I can't even generalize the method (for my own amusement/understanding) they are using to come up with this.

A person with no basic math skills may have decided to try 3 as a guess.

Get a real specialization, you dropouts!

I wonder what their answer would be if the question were the middle of 1 and 16?

Am I the only one who immediately said "4.5" for this one?

At least I answered 5 to the question: the span from 1 to 9 is 8, half of which is 4, but since we're starting from 1 instead of 0, the answer is 1 + 4 = 5. A quite typical programming-math problem, actually; you see these kinds of things all the time, and beginners make the exact same mistake as you did.
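The off-by-one reasoning above is the classic midpoint problem; a minimal sketch of the two equivalent ways to get it right:

```python
lo, hi = 1, 9

# Half the span is 4 -- but that's an *offset*, not the answer...
half_span = (hi - lo) // 2    # 4

# ...the midpoint is the starting point plus half the span:
midpoint = lo + half_span     # 5

# Equivalently, the average of the two endpoints:
assert midpoint == (lo + hi) // 2
print(midpoint)
```

Forgetting to add `lo` back is exactly how you arrive at 4 instead of 5.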

You are so wrong. The question is halfway between 1 and 9, not 0 and 9. The difference between 1 and 9 happens to be 8, and half of 8 happens to be 4. Therefore there is only one correct answer (4). If the question were halfway between 0 and 9, the exact answer would be 4.5, and rounding up to the nearest unit it would be 5.

This is similar to how people whose native language lacks larger integers (e.g. they have words for one, two, three, a few, and a lot) have difficulty with arithmetic like 17 + 6. They're always close, but rarely get it exactly right. Traditionally the explanation is that named integers help us remember exact quantities, but it could be that such people are operating on a logarithmic scale.

From a practicality standpoint, the theory makes sense. Orders of magnitude are far more important than exact numbers. But I think they still have a lot of research to do before most people will believe it.

The mean of a and b is (a + b) / 2, which for 1 and 9 gives 5.

People can get fired up over a few thousand people dying in two towers, but think nothing of the orders of magnitude more deaths that resulted from their overreaction.

It is not overly surprising. Sound intensity, in decibels, is also a logarithmic scale. Star brightness magnitudes, defined in ancient Greece by astronomers using only their unaided eyes, were fitted in modern times to a logarithmic scale.
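Both of those scales are logarithmic in intensity; a sketch of the standard definitions (the reference values passed in here are illustrative, not physical constants):

```python
import math

def decibels(intensity, ref):
    """Sound level in dB relative to ref: 10 * log10(I / I0)."""
    return 10 * math.log10(intensity / ref)

def apparent_magnitude(flux, ref_flux):
    """Astronomical magnitude: -2.5 * log10(F / F0).
    Brighter objects get *smaller* magnitudes."""
    return -2.5 * math.log10(flux / ref_flux)

# A 10x jump in intensity is +10 dB no matter where you start:
print(decibels(10, 1), decibels(100, 10))   # 10.0 and 10.0
# A factor of 100 in flux is exactly 5 magnitudes (Pogson's ratio):
print(apparent_magnitude(1, 100))           # 5.0
```

In both cases equal *ratios* map to equal *steps*, which is precisely the logarithmic perception the article is talking about.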

I've heard in the past that it's because our biological sensors have a logarithmic response to stimuli. In that case, this evidence that thought processes, too, follow logarithmic scales would still be a new, relatively surprising result.

Am I the only one who immediately said "4.5" for this one?

Sometimes, Winston, it's 4. Sometimes it is 5. Sometimes it is 3. And sometimes it is all of them at once. :-)

More interesting (IMHO):

Yesterday I was listening to a podcast where someone said that 0.999... repeating ad infinitum is the same as exactly 1, because there is no other number between those two numbers, so they must be the same.

In practice they are basically the same thing, of course; in maths it seems they shouldn't be, and yet they are.
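The podcast's claim can actually be made exact: 0.999... is the geometric series 9/10 + 9/100 + 9/1000 + ..., and summing it with rational arithmetic gives exactly 1, not "basically" 1. A minimal sketch:

```python
from fractions import Fraction

# 0.999... = sum of 9 / 10^k for k = 1, 2, 3, ...
# Geometric series with first term a = 9/10 and ratio r = 1/10,
# whose exact sum is a / (1 - r).
a, r = Fraction(9, 10), Fraction(1, 10)
total = a / (1 - r)
assert total == 1            # exactly 1, no rounding involved

# Partial sums get arbitrarily close, which is the whole point:
partial = sum(Fraction(9, 10 ** k) for k in range(1, 20))
print(total, float(partial))
```

Using `Fraction` avoids floating-point rounding, so the equality really is exact rather than approximate.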

This seems like a lot of work to explain a peculiar result, when I see no evidence that the researchers even tried to verify that those answering 3 (or even 5) actually understood the question. The bias could come from having an unexpected interpretation of "half way," where a more careful definition might yield a different answer.

You, like a lot of the other commenters, missed the point of this research. This research isn't about testing how smart or dumb people are.

The research is about teasing out the way the brain actually works. That's why they want people to answer intuitively, without thinking much about it. That way, we can see 1) that there really is a discrepancy between our intuition and learned behaviour, and 2) what form this discrepancy takes.

The research is about teasing out the way the brain actually works. That's why they want people to answer intuitively without thinking much about it.

I still find it difficult to believe that people would actually answer 3. Maybe there are such people, but the article does not quote the relevant statistics from the paper, and the wording does not make me believe it even could. (Anyway, small children can't count.)

Also, this simple question does not form a very strong basis for building a whole theory on (in all fairness, judging from the article, they have other arguments). I would be interested in what people who answered 3 (if there ARE such people) would answer for other ranges, e.g. 1 and 50 (most likely 10, not 7), 1 and 16, or 1 and 100. I like to play with the thought that as the numbers get bigger, the answers would converge to the arithmetic mean.

Well, if this were true, I'd expect at least some languages to exhibit at least traces of logarithmic numerals. There are different systems: octal, decimal, duodecimal, you name it, but I'm afraid logarithmic just isn't one of them. I call rubbish, in the modern American pseudo-scientific, splendidly-pleased-with-itself style.

There are logarithmic numerical expressions in all languages -- e.g. small, large, huge.

In pre-agrarian societies there is no real need for precise numerals larger than about 5. It is easy enough to divide food by visual means, or to describe a distance as "three days' walking".

I never thought it had anything to do with testing intelligence. What I'm saying is that there's room for different interpretations of the question, so there are two places where a subject can deviate from expected responses. One is that they understand the question but have a different concept of "in between." The other is that they didn't interpret the question in the way the researchers assume they should.

The point of the research IS about how people, at different ages, have different interpretations of the question.

The great majority of languages have native numerals up to at least ten, which makes sense if you think about counting on your fingers. Anyhow, I fail to see how dividing food by visual means or measuring relatively short distances in days of walking supports, or even relates to, the idea of logarithmic perception of numbers.