I honestly can't tell if this is serious and I'm missing something or they're actually trolling me. It's on web.mit.edu so it should be the former, but sweet mother of God does it look like the latter.

The example with "What's halfway between 1 and 9?" has confirmation bias written all over it. Incidentally, both 3 and 5 are reasonably close to 4.5, although why anyone would round it down to 3 is still debatable. Extrapolating from that to the claim that humans prefer, or are somehow predisposed to, thinking logarithmically is a stretch; one could come up with an infinity of cases where this doesn't hold. I think practical evidence shows this, too -- have you actually seen how much engineering freshmen fight with logarithmic plots? Who the hell thinks 10 is midway between 1 and 100? Oh wait -- it's base *two* logarithm? Based on the fact that, you know, f(x) = log2(x) is close enough to f(x) = x for small x that you can sort of pretend "some traditional societies" -- a formulation that would raise eyebrows even on Wikipedia -- don't mind a little rounding there? Really?

The description of the paper itself seems legit (although, in the well-respected tradition of the free flow of ideas, it's only available for a considerable fee, so I have no intention of actually checking it out myself), but the way it's covered in the article totally sucks. The paper appears to imply that this kind of rounding doesn't apply to just any kind of numbers, and that it also doesn't apply to just any kind of *information*. Some of our peripheral processing is done logarithmically -- think of sound sensitivity, for instance -- so it would make sense if that's how the whole chain is wired up. How this is connected to the article's introduction, other than through the word logarithm, is beyond me.

Huh! Huh! Huh!

You are so wrong. The question is halfway between 1 and 9, not between 0 and 9. The difference between 1 and 9 happens to be 8, and half of 8 happens to be 4. Therefore there is only one correct answer: 1 + 4 = 5. If the question were halfway between 0 and 9, the exact answer would be 4.5, and rounding up to the nearest unit gives 5.

1 2 3 4 *5* 6 7 8 9

4 on left of 5, 4 on right of 5

1 2*3*

4 5 6

7 8 9

makes no sense either, because there's no way 1 2 6 9 looks anything like the other half of 4 5 7 8

It may just be me, but I can't even generalize the method they're using to come up with this (for my own amusement/understanding).

*Edited 2012-10-08 23:32 UTC*

3^0 = 1, 3^2 = 9, so apparently kids and adults who weren't taught the algebra naturally think of 3^1 = 3 as half-way between the two. IOW, we don't quantify numbers by counting, we look at order of magnitude.
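A minimal sketch of that arithmetic (plain Python as illustration, not anything from the paper):

```python
import math

# On a log scale, "halfway" between 3**0 = 1 and 3**2 = 9 means
# halfway between the exponents, i.e. 3**1 = 3.
log_midpoint = 3 ** ((0 + 2) / 2)

# Equivalently, the geometric mean of the endpoints.
geometric_mean = math.sqrt(1 * 9)

print(log_midpoint, geometric_mean)  # 3.0 3.0
```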

This is similar to how people whose native language lacks larger integers (e.g. they have words for: one, two, three, a few, and a lot) have difficulty with math like 17 + 6. They're always close, but rarely get it exactly right. Traditionally the explanation is that named integers help us remember exact quantities, but it could be that such people are operating under a logarithmic scale.

From a practicality standpoint, the theory makes sense. Orders of magnitude are far more important than exact numbers. But I think they still have a lot of research to do before most people will believe it.

At least I answered 5 to the question: 1 to 9 spans a distance of 8, half of which is 4, but since we're starting from 1 instead of 0 the answer is 1 + 4 = 5. A quite typical programming maths problem, actually; you see these kinds of things all the time, with beginners making the exact same mistake as you did.
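The fencepost arithmetic described above can be sketched in a few lines (the helper name is my own, plain Python):

```python
def midpoint(lo, hi):
    """Arithmetic midpoint of the interval [lo, hi]."""
    return lo + (hi - lo) / 2

print(midpoint(1, 9))  # 5.0 -- halfway between 1 and 9
print(midpoint(0, 9))  # 4.5 -- halfway between 0 and 9
```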

Perhaps this kind of logarithmic operation is also manifesting as genocide blindness. One or two people murdered can make the news, but when it starts turning into a massacre (especially far away), people seem to care less.

People can get fired up over a few thousand people dying in two towers, but think nothing of the orders of magnitude more that resulted from their overreaction.

I've heard in the past that it was because our biological sensors have a logarithmic response to stimuli. In that case, this evidence that thought processes, too, follow logarithmic scales would still be a new, relatively surprising result.

Well, if this were true, I'd expect at least some languages to exhibit at least traces of logarithmic numerals. There are different systems: octal, decimal, duodecimal, you name it, but I'm afraid logarithmic is just not one of them. I call rubbish, in the modern American pseudo-scientific, splendidly-pleased-with-itself style.

There are logarithmic numerical expressions in all languages -- e.g. small, large, huge.

In pre-agrarian societies there is no real need for precise numerals larger than about 5. It is easy enough to divide food by visual means or to describe a distance as "three days walking".

Well, how do you know *small*, *large* and *huge* are logarithmic? Can you back it up with any research?

The great majority of languages have native numerals up to at least ten. Which makes sense if you think about counting on your fingers. Anyhow, I fail to see how dividing the food by visual means or measuring relatively short distances in days of walking supports, or actually even relates to the idea of logarithmic perception of numbers.

My guess would be 5. My 9 year old son said 5, but then said it could also be 5.5. So five seems to be the dominant number in any answer.

More interesting (IMHO):

Yesterday I was listening to a podcast where someone said that 0.999999999999... ad infinitum is the same as exactly 1, because there is no other number between those two numbers, so they must be the same.

In practice they are basically the same thing, of course; in maths you'd think they shouldn't be, but then again they are.
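For what it's worth, the podcast's claim can be made concrete with exact fractions; this is just a sketch of the standard geometric-series argument, not anything from the article:

```python
from fractions import Fraction

# Partial sums of 0.9 + 0.09 + 0.009 + ... kept as exact fractions.
# After n terms the sum is 1 - 10**(-n), so the gap to 1 shrinks by
# a factor of ten per term; in the limit no number fits between them.
s = Fraction(0)
for k in range(1, 6):
    s += Fraction(9, 10**k)
    print(s, "gap to 1:", 1 - s)
```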

This seems like a lot of work to explain a peculiar result, when I see no evidence that the researchers even tried to verify that those answering 3 (or even 5) actually understood the question. The bias could come from having an unexpected interpretation of "half way," where a more careful definition might yield a different answer.

You, like a lot of the other commenters, missed the point of this research. This research isn't about testing how smart or dumb people are.

The research is about teasing out the way the brain actually works. That's why they want people to answer intuitively without thinking much about it. That way, we can see that 1) There really is a discrepancy between our intuition and learned behaviour 2) What form this discrepancy takes.

I still find it difficult to believe that people would actually answer 3. Maybe there are such people, but the article does not quote the relevant statistics from the paper, and the wording does not make me believe it even could. (Anyway, small children can't count.)

Also, this simple question does not form a very strong basis for building a whole theory on (in all fairness, judging from the article, they have other arguments). I would be interested what people who answered 3 (if there ARE such people) would answer to other ranges, e.g. 1 and 50 (most likely 10, not 7), 1 and 16 or 1 and 100. I like to play with the thought that as the number gets bigger, the answers would converge to the arithmetic mean.
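As a quick check of what a logarithmic (geometric-mean) midpoint would actually predict for those ranges -- my own back-of-the-envelope Python, not anything from the paper:

```python
import math

# Arithmetic vs geometric "midpoints" for the ranges mentioned above.
for lo, hi in [(1, 9), (1, 16), (1, 50), (1, 100)]:
    arithmetic = (lo + hi) / 2
    geometric = math.sqrt(lo * hi)
    print(f"{lo}..{hi}: arithmetic {arithmetic}, geometric {geometric:.2f}")
```

Notably, the geometric mean of 1 and 50 comes out near 7, so a strictly logarithmic answerer would say 7 rather than 10.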

I never thought it had anything to do with testing intelligence. What I'm saying is that there's room for different interpretations of the question. So there are two places that a subject can deviate from expected responses. One is that they understand the question and have a different concept of "in between." The other is that they didn't interpret the question in the way that the researchers assume they should.

The point of the research IS about how people, at different ages, have different interpretations of the question.

I notice that virtually no one commenting seems to have read the actual paper. It talks about how children and traditional hunter-gatherers think about numbers.

**One of the researchers' assumptions is that if you were designing a nervous system for humans living in the ancestral environment — with the aim that it accurately represent the world around them — the right type of error to minimize would be relative error, not absolute error. After all, being off by four matters much more if the question is whether there are one or five hungry lions in the tall grass around you than if the question is whether there are 96 or 100 antelope in the herd you've just spotted.**
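The lions-vs-antelope contrast can be stated in one line of arithmetic; a trivial illustration (my own, not from the paper):

```python
# Both scenarios involve an absolute error of 4, but the relative
# errors differ by a factor of twenty.
def relative_error(true_value, estimate):
    return abs(estimate - true_value) / true_value

print(relative_error(5, 1))     # lions: 1 estimated, 5 present -> 0.8
print(relative_error(100, 96))  # antelope: 96 vs 100 -> 0.04
```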