Linked by Thom Holwerda on Thu 21st Jul 2011 14:10 UTC, submitted by Jennimc
Mozilla & Gecko clones "Over the last couple of weeks, Mozilla has finally stepped up its 64-bit testing process. There are now five slaves dedicated to building Firefox for Windows x64, which means that from Firefox 8 and onwards, you'll be able to pick up 64-bit builds that are functionally identical to its 32-bit cousins but operating in native 64-bit CPU and memory space." The 64-bit version is about 10% faster, benchmarks show.
RE: Comment by InformativeCommenter
by merkoth on Thu 21st Jul 2011 14:41 UTC in reply to "Comment by InformativeCommenter"
merkoth Member since:
2006-09-22

Excuse me: what?

Reply Score: 3

1c3d0g Member since:
2005-07-06

Doesn't matter what you're saying, that tripe you just wrote is off-topic and completely irrelevant to the discussion at hand. If you've got nothing of value to add, then don't comment. Period.

"Better to remain silent and be thought a fool than to speak out and remove all doubt."

Reply Score: 5

Happy to see a benefit
by jgagnon on Thu 21st Jul 2011 14:57 UTC
jgagnon
Member since:
2008-06-24

It seems like most things that do the initial leap to 64 have a speed hit of some sort, so I'm glad that this actually shows an improvement.

I'm now very used to doing my daily work on a box with 8 GB of RAM so the extra memory usage doesn't mean much to me for 64-bit stuff.

Reply Score: 1

RE: Happy to see a benefit
by Alfman on Thu 21st Jul 2011 17:22 UTC in reply to "Happy to see a benefit"
Alfman Member since:
2011-01-28

The main improvement in x86-64 was the additional general-purpose registers; the small register count has long been a significant bottleneck of the x86 ISA (particularly since all spilled stack variables need to be kept synchronized by the multicore cache-coherency protocols). Unfortunately, the number of general-purpose registers in the new AMD64 ISA is still only 16 (compared to 32 for PPC).

Beyond that, 64-bit processing can actually slow things down due to the need to handle more bits in the ALU and in memory addressing. Also, 64-bit pointers and variables consume twice as much RAM, causing caches to fill up more quickly.

I was disappointed that AMD developed AMD64. Not because it wasn't an improvement (it is), but because it means the complex x86 ISA will remain the top desktop CPU for another decade or two at the expense of better alternatives.

The x86 has too much logic built around addressing its legacy inefficiencies. This makes no sense for new CPU designs, and it is the primary reason the x86 consumes so much more energy than other CPUs on the market.

Even Intel was trying to get away from x86; AMD's maneuver sucked us right back in.

Reply Score: 4

RE[2]: Happy to see a benefit
by AndrewZ on Thu 21st Jul 2011 17:40 UTC in reply to "RE: Happy to see a benefit"
AndrewZ Member since:
2005-11-15

I was disappointed that AMD developed AMD64, not because it wasn't an improvement. It is, but because it means the complex x86 ISA will remain the top desktop CPU for another decade or two at the expense of better alternatives.

If you don't like X86-64, there is always: SPARC, POWER7, MIPS64. Get one of these so you can conceptualize cleanly in 64-bit brain space :-)

Reply Score: 4

RE[3]: Happy to see a benefit
by Alfman on Thu 21st Jul 2011 19:03 UTC in reply to "RE[2]: Happy to see a benefit"
Alfman Member since:
2011-01-28

AndrewZ,

"If you don't like X86-64, there is always: SPARC, POWER7, MIPS64. Get one of these so you can conceptualize cleanly in 64-bit brain space :-)"

A desktop user really has to go out of their way to get an alternative to x86. I've used Solaris on SPARC and OS X on PPC, but are those even commercially viable targets anymore?

I suppose PPC systems are being sold as Sony game consoles, but they actively discourage independent devs like me.

Is there such a thing as a MIPS desktop?

The 64-bit Alpha processor was (and may still be) superior to x86, but the pervasiveness of x86 is unavoidable.

I seriously considered becoming an Itanium developer, but it was much too expensive. In any case, that processor was brutally rejected in the marketplace once AMD came out with a 64-bit processor which ran legacy x86 code natively.

Technically, the Itanium specs are astounding: 128 general-purpose 64-bit integer registers, and a sliding register window that eliminates the need to save/restore them to a memory stack on each function call (as calling conventions normally require).

They did away with the power-hungry superscalar architecture (the main purpose of which is to work around the limited number of registers and the limited parallelism of the x86).

The ISA supports explicit parallelism, which yields a very significant speedup on many common algorithms.

This is all very exciting, but Itanium performed awfully when running sequential x86 code, which needs a complex superscalar CPU in order to perform decently. It quickly earned a reputation for being slow for this reason.

So long as software remains proprietary and cannot even be recompiled, x86 compatibility will prove to be more important than anything else in the CPU development arena.

All of the superior 64-bit ISAs would have had a real chance to establish a broad consumer market if only AMD64 had been avoided.

Reply Score: 3

RE[4]: Happy to see a benefit
by AndrewZ on Thu 21st Jul 2011 20:16 UTC in reply to "RE[3]: Happy to see a benefit"
AndrewZ Member since:
2005-11-15

NEC released a RISC MIPS64 part back in 1998. Loongson has a low-power 64-bit MIPS laptop. MIPS is a very nice architecture with a very clean instruction set. But after you port Linux you still only have a laptop that runs Linux. And slowly, too!

Ultimately these architectures are all still just Turing equivalent; they all run the same high-level software.

Edited 2011-07-21 20:17 UTC

Reply Score: 2

RE[5]: Happy to see a benefit
by Alfman on Thu 21st Jul 2011 21:04 UTC in reply to "RE[4]: Happy to see a benefit"
Alfman Member since:
2011-01-28

AndrewZ,

"NEC released RISC MIPS64 in 1998. Loongson has a low power 64-bit MIPS laptop. MIPS is a very nice architecture with a very clean instruction set. But after you port Linux you still only have a laptop that runs Linux. And slowly too!"

Interesting.

A few years ago I had the opportunity to compare Debian running on an older embedded 233 MHz PPC versus a (then new) 400 MHz ARM9. I know one can't compare clock speeds directly, but the PPC was many times faster across all CPU loads.

I was disappointed with the ARM's performance, although it's entirely possible this was due to inadequate software optimization on the ARM architecture.

Debian ports:

Intel x86
Motorola 68k (ceased)
Sun Sparc
Alpha
PowerPC
ARM
MIPS
HP PA-RISC
Itanium
IBM S/390
AMD64
SuperH
Renesas m32r
AVR32

Reply Score: 2

RE[6]: Happy to see a benefit
by dsmogor on Thu 21st Jul 2011 21:31 UTC in reply to "RE[5]: Happy to see a benefit"
dsmogor Member since:
2005-09-01

It's probably all due to:
- poor compiler support
- weaker FP, if any
- PPC being out-of-order; I remember the 601 and 604 were quite decent archs, comparable to the Pentium Pro
- the ARM's weak memory bus (wide memory buses are power hungry)

Reply Score: 2

RE[4]: Happy to see a benefit
by dsmogor on Thu 21st Jul 2011 21:27 UTC in reply to "RE[3]: Happy to see a benefit"
dsmogor Member since:
2005-09-01

Too bad the theory didn't meet real-world demands.
The newest Itanium incarnation is indeed out-of-order (which is ridiculous given that the whole point of the VLIW ISA was to avoid it). Interestingly, the change was required to save IA64 performance on business-critical computation (it was doing just fine on HPC tasks), where it was getting spanked by old-fashioned x86 competition despite being a transistor monster.
So no, IA64 on the desktop would mean even more wasted power for all those execution units (which simply must be there, as they are explicitly part of the ISA).

On the power-efficiency side of things, ARM is giving Intel a run for its money, because it's simple where needed.

Reply Score: 2

RE[5]: Happy to see a benefit
by Alfman on Thu 21st Jul 2011 21:57 UTC in reply to "RE[4]: Happy to see a benefit"
Alfman Member since:
2011-01-28

dsmogor,

"Too bad the theory didn't meet the real world demands."

Well I know that but there are two major impediments intel was/is unable to overcome:

1. The new architecture would always be judged by its ability to execute legacy x86 code. This means it probably never had a chance.

2. Software algorithms generally need to be re-written to take advantage of explicit parallelism. Compiler improvements are needed too. To really see the raw performance of Itanium, everyone needed to re-engineer their software, and very few outside of HPC were actually willing to commit to that.

I wasn't trying to promote Itanium in particular, just saying that the transition from 32-bit to 64-bit would have been as good an opportunity as any to leave x86 behind in favor of better desktop architectures. Today there's even less incentive than a decade ago because x86-64 is "good enough"; x86-32 was not.

Reply Score: 1

Bill Shooter of Bul Member since:
2006-07-14

1. The new architecture would always be judged by its ability to execute legacy x86 code. This means it probably never had a chance.


Disagree. There were Linux and Windows servers that ran on Itanium at release. All of the relevant server apps worked; at worst it was a recompile.


2. Software algorithms generally need to be re-written to take advantage of explicit parallelism. Compiler improvements are needed too. To really see the raw performance of Itanium, everyone needed to re-engineer their software, and very few outside of HPC were actually willing to commit to that.


The compiler was the big sticking point. The hardware designers realized the processor could be faster if the compiler was smarter, and designed around the assumption that the compiler would do the magic for it. Turns out that was really difficult to do, so native software for Itanium sucked because the compilers weren't doing the magic well enough.


The really important factors for processors these days are speed, power consumption, and size. Obviously those are not all independent of each other. I think there are still opportunities for other architectures, but the start-up costs for a new architecture are pretty crazy. ARM is killing x86 on the low end and creeping up towards larger devices.

Reply Score: 2

RE[7]: Happy to see a benefit
by Alfman on Fri 22nd Jul 2011 21:37 UTC in reply to "RE[6]: Happy to see a benefit"
Alfman Member since:
2011-01-28

Bill Shooter of Bul,

"Disagree. There were Linux and Windows servers that ran on Itanium at release. All of the relevant server apps worked; at worst it was a recompile."

At worst, it's emulation, which is exactly what happened with nearly all Windows software. And that performance was pathetic.

At the very least a recompile would overcome that overhead, but it still doesn't automatically convert code to use the processor's inherent parallelism.

If I recall, GCC didn't even attempt to vectorize x86 code using SSE until 2004/5, some six years after its introduction. It really was necessary to rewrite code to use intrinsics. Even today, intrinsics are generally the most reliable approach. There is nothing wrong with that, but it's more than a recompile.

I honestly can't speak to GCC's effectiveness on itanium since I ended up not developing for it.

(Out of curiosity, is Linux supported under ICC? I searched briefly but all I found were posts of people having trouble)

"The compiler was the big sticking point. The hardware designers realized the processor could be faster if the compiler was smarter, and designed around the assumption that the compiler would do the magic for it."

Yes, absolutely: the processor was ahead of compiler technology. Instead of having the processor analyze static code in real time, it makes far more sense to have the compiler do a better analysis of the code ahead of time.

This shouldn't be controversial to anybody, yet we still have a conundrum in that CPUs which don't analyze code at runtime will be very poor at handling legacy x86 code.

"I think there are still opportunity for other architectures, but the start up costs for a new architecture are pretty crazy."

Of course I agree with that, but we had the opportunity before we transitioned to AMD64. Instead of migrating to a new 64 bit architecture which shared the architectural limitations of x86, we should have migrated to a better 64bit architecture.

Reply Score: 2

RE[2]: Happy to see a benefit
by jgagnon on Thu 21st Jul 2011 17:59 UTC in reply to "RE: Happy to see a benefit"
jgagnon Member since:
2008-06-24

Every time someone claims that x86 (and x64) are inefficient I have to chuckle. The CISC vs RISC argument is as pointless now as it was many years ago. Intel and AMD have been able to go above and beyond every fictitious "hurdle" people have claimed since day one. There is nothing about the x86 instruction set that makes it inherently bad or inefficient. The only guarantee is that it is complicated to do some things.

Intel, and to a lesser extent AMD, have proven that they can extend the instruction set to handle a wide variety of tasks with good results. I expect that trend to continue for many years to come.

I feel like the x86 naysayers need to look back on history before they make their protest signs of the coming apocalypse. If nothing else we can all cook the zombies with our overheating chips. ;)

Reply Score: 1

RE[3]: Happy to see a benefit
by _txf_ on Thu 21st Jul 2011 18:52 UTC in reply to "RE[2]: Happy to see a benefit"
_txf_ Member since:
2008-03-17

Every time someone claims that x86 (and x64) are inefficient I have to chuckle. The CISC vs RISC argument is as pointless now as it was many years ago.


Just because he was talking about the arch, it doesn't necessarily mean that it was a criticism against cisc.

In practice it is. Can you name any other CPU arch that is CISC besides x86? I can only think of one off the top of my head, the 8051 microcontroller. At the same time I could make a very long list of RISC CPUs.

New CPU architectures have tended to go with RISC. I think the OP suggested going with Itanium (which grew out of PA-RISC).

However, modern x86 has sort of subsumed a lot of RISC ideas. So yeah, the argument is pointless (then again, there is still the overhead of the various modes and of decoding to uOPs).

Edited 2011-07-21 18:58 UTC

Reply Score: 2

RE[4]: Happy to see a benefit
by phoenix on Thu 21st Jul 2011 20:20 UTC in reply to "RE[3]: Happy to see a benefit"
phoenix Member since:
2005-07-11

"Every time someone claims that x86 (and x64) are inefficient I have to chuckle. The CISC vs RISC argument is as pointless now as it was many years ago.


Just because he was talking about the arch, it doesn't necessarily mean that it was a criticism against cisc.

In practice it is. Can you name any other CPU arch that is CISC besides x86? I can only think of one off the top of my head, the 8051 microcontroller. At the same time I could make a very long list of RISC CPUs.

New CPU architectures have tended to go with RISC. I think the OP suggested going with Itanium (which grew out of PA-RISC).
"

Technically, Intel and AMD x86 CPUs are no longer CISC. The instruction set is, but the CPU isn't. The front-end instruction decoder is CISC, but it translates those instructions into micro-ops and whatnot that run on a very RISC-like core.

And if you look at the block diagrams for Intel and AMD CPUs, you'll find that the front-end decoder takes up almost 50% of the CPU die. Behind the cache, it's the largest consumer of die space/transistors.

So, yes, x86/amd64 CPUs are very inefficient. Just think how much faster, smaller, and more parallel the CPU/cores could be if that x86 front-end decoder could be removed and the CPU just ran the RISC micro-ops and whatnot directly.

I believe it was the Pentium that was the last truly CISC x86 CPU.

Ars Technica ran a nice series of articles about this back in the P4 / Athlon64 / Opteron days.

Edited 2011-07-21 20:20 UTC

Reply Score: 4

RE[5]: Happy to see a benefit
by Drumhellar on Thu 21st Jul 2011 20:49 UTC in reply to "RE[4]: Happy to see a benefit"
Drumhellar Member since:
2005-07-12

And if you look at the block diagrams for Intel and AMD CPUs, you'll find that that front-end decoder takes up almost 50% of the CPU die. Behind the cache, it's the largest consumer of die space/transistors.


Really? I thought cache took up the majority of the die.

I've been looking and looking for an image that confirms your 50% figure, but I cannot find one. Got a link?

EDIT: Oops. Misread your statement (and I quoted it, too.) You mean 50% of non-cache die space?

Edited 2011-07-21 20:51 UTC

Reply Score: 2

RE[6]: Happy to see a benefit
by phoenix on Thu 21st Jul 2011 20:50 UTC in reply to "RE[5]: Happy to see a benefit"
phoenix Member since:
2005-07-11

Hey, I said "almost" and "after the cache". ;) The 50% was pulled from thin-air, but the front-end decoder is still a huge chunk of the silicon, especially in comparison to the rest of the CPU that does the actual work.

Reply Score: 2

RE[5]: Happy to see a benefit
by dsmogor on Thu 21st Jul 2011 21:33 UTC in reply to "RE[4]: Happy to see a benefit"
dsmogor Member since:
2005-09-01

I have read in Core 2 it's below 15%.

Reply Score: 2

RE[5]: Happy to see a benefit
by zima on Mon 25th Jul 2011 07:25 UTC in reply to "RE[4]: Happy to see a benefit"
zima Member since:
2005-07-06

I believe it was the Pentium that was the last truly CISC x86 CPU.

The P5 core seems to be somewhat revived in Atom and Larrabee... without digging too much, see the last section of the summary: http://en.wikipedia.org/wiki/P5_(microarchitecture)

(The description of Bonnell (Atom) does mention translation into micro-ops... but, curiously, also that the majority of x86 instructions produce only one u-op on Atom; apparently the u-ops are very analogous to the x86 CISC instructions.)

Reply Score: 1

RE[3]: Happy to see a benefit
by re_re on Thu 21st Jul 2011 19:17 UTC in reply to "RE[2]: Happy to see a benefit"
re_re Member since:
2005-07-06

One thing you failed to mention is that in general there has been a bit of crossover on both the CISC and RISC sides. x86 and x86-64 have both moved a little towards the RISC side and vice versa (PPC, anyway). The bottom line is that when you look at ARM processors or Intel/AMD processors, they all have good performers and poor performers.

Reply Score: 2

RE[3]: Happy to see a benefit
by Alfman on Thu 21st Jul 2011 19:28 UTC in reply to "RE[2]: Happy to see a benefit"
Alfman Member since:
2011-01-28

jgagnon,
"Every time someone claims that x86 (and x64) are inefficient I have to chuckle. The CISC vs RISC argument is as pointless now as it was many years ago."

It is pointless, but only because the market chooses processors by compatibility with existing software, not technical superiority. If we could start with a clean slate, then these technical issues would become very important.


"Intel and AMD have been able to go above and beyond every fictitious 'hurdle' people have claimed since day one. There is nothing about the x86 instruction set that makes it inherently bad or inefficient."

They're not fictitious hurdles. We can always throw more transistors at the problem, by running two speculative branches in parallel (for instance), but this is a huge waste of power with limited benefit. The x86 ISA's inability to communicate algorithmic information to the CPU is exactly why the CPU is forced to speculate in the first place.

The shortage of general-purpose registers is an undebatable limitation of the x86 ISA compared to every single other architecture that I'm familiar with. This is the most important change in the AMD64 ISA.

The limitations of x86 ISA hold back not only hardware but also compiler development.

"The only guarantee is that it is complicated to do some things."

Isn't that a good reason to switch ISAs?

"Intel, and to a lesser extent AMD, have proven that they can extend the instruction set to handle a wide variety of tasks with good results. I expect that trend to continue for many years to come."

Yeah, it's been patched over and over again. Take SSE for example: it's a good thing, but it's poorly integrated into the rest of the CPU. SSE registers are not compatible with general-purpose registers or instructions, even though developers would often like them to be. This often results in us moving data around on the CPU for no reason other than to satisfy ISA semantics. It gets "fixed" by creating new SSE instructions that overlap even more with the generic ones. The FP unit is a great example of unnecessary complexity. All this requires more complex circuitry and compilers, and more energy-hungry systems.

There is no doubt that if we could start from scratch, we wouldn't do it this way; it exists the way it does today for purely legacy reasons.

"I feel like the x86 naysayers need to look back on history before they make their protest signs of the coming apocalypse. If nothing else we can all cook the zombies with our overheating chips. ;) "


What apocalypse? I claimed that the dominance of x86 is holding back superior architectures. I quote myself "it means the complex x86 ISA will remain the top desktop CPU for another decade or two at the expense of better alternatives."

Edited 2011-07-21 19:33 UTC

Reply Score: 2

RE[4]: Happy to see a benefit
by Drumhellar on Thu 21st Jul 2011 20:32 UTC in reply to "RE[3]: Happy to see a benefit"
Drumhellar Member since:
2005-07-12

Put the whip down. The horse is dead.

This is as pointless as the "Monolithic vs Microkernel" debate, or, even better, the "Edward vs Jacob" debate.

Reply Score: 2

RE[5]: Happy to see a benefit
by Alfman on Thu 21st Jul 2011 21:32 UTC in reply to "RE[4]: Happy to see a benefit"
Alfman Member since:
2011-01-28

Drumhellar,

"Put the whip down. The horse is dead."

Not dead yet; it'll probably come back in a decade or so. But you are right that people have been at it for many years, rehashing many of the same points over and over again.


This 2003 link references an old osnews.com article!

http://forums.appleinsider.com/showthread.php?t=27572

I find these quotes charming:

"As little as 4-5 monthss ago there were people on these boards and some journalists who were urging Apple to go x86. Now all I can hear is ..... silence."

"PowerPC is already well established in the low power market, so adding the G5 gets them back in the high-end game. With the advantages of a better instruction set and architecture this should allow IBM/Apple to compete while spending far less money than Intel and AMD. Intel can afford this. AMD can't."


Anyways, it seems that things haven't really progressed. One poster sums it up thusly:

"But this is something only a programmer cares about... the end user really just cares about price/performance and compatibility (the latter has always been the pillar of x86's strength)."

Reply Score: 3

RE[3]: Happy to see a benefit
by viton on Fri 22nd Jul 2011 07:22 UTC in reply to "RE[2]: Happy to see a benefit"
viton Member since:
2005-08-09

There is nothing about the x86 instruction set that makes it inherently bad or inefficient.

1) Variable-length instruction encoding, resulting in a power-hungry and complex instruction decoder.
2) A two-operand, non-orthogonal instruction set, which generates a lot of useless MOVs.
3) No fused multiply-add (FMA).

This is why Intel hasn't been able to catch ARM these past few years, even with their huge resources and process advantage.

Reply Score: 2

RE[3]: Happy to see a benefit
by Lennie on Fri 22nd Jul 2011 11:03 UTC in reply to "RE[2]: Happy to see a benefit"
Lennie Member since:
2007-09-22

Well, the CPU translates from CISC to RISC-like operations. If the compiler could compile more directly to those CPU instructions, it would save a lot of die size.

But backwards compatibility is something which, at least in the Windows world, is very important.

And the Windows desktop is still one of the biggest markets for CPUs (not counting embedded), so backwards compatibility had to be available.

I know I would have liked to see it turn out differently, but that is how it is.

Reply Score: 2

RE[4]: Happy to see a benefit
by smitty on Fri 22nd Jul 2011 21:32 UTC in reply to "RE[3]: Happy to see a benefit"
smitty Member since:
2005-10-13

Well, the CPU translates from CISC to RISC-like operations. If the compiler could compile more directly to those CPU instructions, it would save a lot of die size.

But backwards compatibility is something which, at least in the Windows world, is very important.

This assumes that, if the compiler generated RISC-like operations, the CPU wouldn't still do translation. I'm not so sure that's a valid assumption.

Each time Intel releases a new architecture they tweak how those instructions are generated in order to increase performance. They would have to define some sort of baseline common to all their CPUs, and then rely on software developers to recompile with newer compilers each time a CPU was released in order to take advantage of the new features. That wouldn't happen, except in new releases of software. I can easily see how they might still want some hardware on the CPU to automatically take advantage of new hardware changes each release.

Where it really hurts them is in the low-power cell phone market, where they are trying to compete with ARM. But I wouldn't be so sure they wouldn't choose to do it anyway on the more powerful desktop CPUs, where power and die costs can be hidden more easily.

Reply Score: 2

RE[3]: Happy to see a benefit
by Kebabbert on Sun 24th Jul 2011 23:00 UTC in reply to "RE[2]: Happy to see a benefit"
Kebabbert Member since:
2007-07-27

Every time someone claims that x86 (and x64) are inefficient I have to chuckle. The CISC vs RISC argument is as pointless now as it was many years ago. Intel and AMD have been able to go above and beyond every fictitious "hurdle" people have claimed since day one. There is nothing about the x86 instruction set that makes it inherently bad or inefficient.

Well, there are sysadmins who refuse to have x86 in their server halls. Why is that? Because x86 is buggy.

An x86 CPU has over 1,000 asm instructions today. Many of them are never used. It is totally buggy and inefficient; many transistors are simply of no use. Have you heard about Intel and AMD recalling CPUs because of bugs? How many times has that happened? Many.

Here is an article at AnandTech that talks about the bloat that x86 is. x86 should die, and something cleaner should replace it. x86 has heritage going back to 8-bit CPUs. That is horrendous.
http://www.anandtech.com/show/3593

"The total number of x86 instructions is well above one thousand" (!!)
"CPU dispatching ... makes the code bigger, and it is so costly in terms of development time and maintenance costs that it is almost never done in a way that adequately optimizes for all brands of CPUs."
"the decoding of instructions can be a serious bottleneck, and it becomes worse the more complicated the instruction codes are"
"The costs of supporting obsolete instructions is not negligible. You need large execution units to support a large number of instructions. This means more silicon space, longer data paths, more power consumption, and slower execution."

Reply Score: 2

RE[4]: Happy to see a benefit
by Alfman on Mon 25th Jul 2011 00:37 UTC in reply to "RE[3]: Happy to see a benefit"
Alfman Member since:
2011-01-28

Kebabbert,

"Well, there are sysadmins who refuse to have x86 in their server halls. Why is that? Because x86 is buggy."


Many of us share your conclusion that x86 has many undesirable characteristics compared to modern architectures.

But frankly, I have no idea what you mean by "buggy". Yes, various x86 CPUs have been released with serious bugs and recalled. Off the top of my head I can think of three: the notorious Intel FDIV bug, AMD's MUL overheat bug, and the AMD Phenom cache bug. But these bugs are just that: bugs. They have absolutely nothing to do with the architecture.

Do you have any supporting evidence that x86 processors are buggier than other processors in proportion to market share?

Reply Score: 2

RE[2]: Happy to see a benefit
by bert64 on Mon 25th Jul 2011 07:03 UTC in reply to "RE: Happy to see a benefit"
bert64 Member since:
2007-04-23

Had AMD not developed AMD64, Intel would have been forced to do so instead, or we would just be stuck with PAE...

It is closed-source software which keeps x86 alive; too many people have far too much invested in software that runs only on x86 to ever consider anything else. And commercial vendors will never bother supporting a new architecture until there is a sufficient user base (thus a catch-22 situation).

Reply Score: 2

RE[3]: Happy to see a benefit
by Alfman on Mon 25th Jul 2011 08:02 UTC in reply to "RE[2]: Happy to see a benefit"
Alfman Member since:
2011-01-28

bert64,

"Had AMD not developed AMD64, Intel would have been forced to do so instead, or we would just be stuck with PAE...

It is closed-source software which keeps x86 alive; too many people have far too much invested in software that runs only on x86 to ever consider anything else. And commercial vendors will never bother supporting a new architecture until there is a sufficient user base (thus a catch-22 situation)."


Hypothetically, if x86 support hadn't been developed beyond 32-bit, then eventually the market would have had a strong incentive to adopt another 64-bit platform.

Keep in mind that all the AMD64 code in existence today had to be recompiled/tuned for AMD64 anyway; as far as high-level developers and users are concerned, the target could just as easily have been another 64-bit architecture (closed source or not).

Any proprietary legacy code which is no longer supported and cannot be ported could still be run on actual x86 CPUs or under emulation (legacy code may still run faster under emulation than on the original system it was coded for). With the knowledge that x86 was dead, developers of actively supported software would be wise to compile for the new architecture.


Of course, someone would probably manage to mess up the transition somehow, I'll give you that. But thinking as an engineer, there just has to be a way to leave x86 behind. There is precedent: Apple did it a couple of times already. A long time ago I saw a PowerPC Mac with an x86 processor on a riser board. A little far-fetched for most people, but then again, any commercial shop with mission-critical legacy x86 apps would probably not have a problem with this.

Reply Score: 2

RE[4]: Happy to see a benefit
by smitty on Mon 25th Jul 2011 09:25 UTC in reply to "RE[3]: Happy to see a benefit"
smitty Member since:
2005-10-13

But thinking as an engineer, there just has to be a way to leave x86 behind. There is precedent: Apple did it a couple of times already.

Get Microsoft and Intel to come to an agreement that all future chips/software will be based on a new architecture and it can happen. It's unlikely to ever happen, though, because they would be afraid of losing market share to Linux/Apple/AMD respectively; and that would definitely happen during the transition period. If they ever decide that those losses are necessary, the change could happen quite rapidly.

Reply Score: 2

RE[5]: Happy to see a benefit
by Alfman on Mon 25th Jul 2011 17:08 UTC in reply to "RE[4]: Happy to see a benefit"
Alfman Member since:
2011-01-28

smitty,

"Get Microsoft and Intel to come to an agreement that all future chips/software will be based on a new architecture and it can happen. It's unlikely to ever happen, though, because they will be afraid of losing market share to Linux/Apple/AMD respectively"

I'm curious why a mass migration to AMD64 would be different, in this regard, from a mass migration to some other 64-bit architecture?

Microsoft themselves have ported Windows to alternative platforms, but people kept x86 since it was always good enough. If it had ceased to be good enough (by not being developed past 32-bit), then an architectural switch would have been imminent.

It's just my meager opinion though.

Reply Score: 2

RE[6]: Happy to see a benefit
by smitty on Mon 25th Jul 2011 18:52 UTC in reply to "RE[5]: Happy to see a benefit"
smitty Member since:
2005-10-13

I'm curious why a mass migration to AMD64 would be different than a mass migration to some other 64bit architecture in this regards?

Simple. Because by its very nature, AMD64 means there will be no mass migration. It allows people to port applications over piecemeal, one at a time, so that apps that require the new functionality can get it right away while the old ones are left alone. Without that compatibility, you force a complete migration of all apps at once - and if that happens, you've suddenly lost one of the big things keeping people from dumping Windows. Keeping that compatibility would also be a big incentive to keep buying chips from AMD that could run the old code, even if Intel moved on to faster models on a new architecture.

Reply Score: 2

RE[7]: Happy to see a benefit
by Alfman on Mon 25th Jul 2011 20:16 UTC in reply to "RE[6]: Happy to see a benefit"
Alfman Member since:
2011-01-28

smitty,

"Simple. Because by its very nature, AMD64 means there will be no mass migration. It allows people to port over applications piecemeal, one at a time, so that apps that require the new functionality can get it right away while leaving all the old ones alone."

I think it's wrong to say "there will be no mass migration", since there already has been one: most commercial and free software today is available for AMD64, albeit at a slow pace.


"If you don't have that compatibility, then it does force a complete mass migration of all apps at once"

I do not suggest breaking compatibility. Legacy productivity apps can be handled fine with (good) emulation. 32-bit OS/system DLL calls could be marshaled directly into the host OS, where they would be executed natively. Done right, this would be totally transparent, with no perceptible overhead. With an appropriate DirectX shim, even legacy games should run OK.


The main types of apps for which emulation is inadequate would be HPC and high-load servers. If for some reason these cannot be ported, one could always use a dedicated x86 system to run those processes. Why not? Or adopt a hybrid solution like the x86 riser I mentioned already.


Of course x86 compatibility takes engineering effort, but AMD/MS needed to go through similar effort just to allow x86 apps to run on AMD64. Obviously they needed to implement mechanisms for x86-32 calls to be marshaled into the x86-64 OS (WOW64). I don't believe this would have been more work if the 64-bit architecture had been non-x86, especially if they continued to follow standard calling conventions. However, I'd be interested in your insight.
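The marshaling idea described above can be sketched in miniature. This is a hedged toy illustration, not real WOW64 code; every function name below is hypothetical. The point is simply that a 32-bit caller's arguments are widened and forwarded to a native 64-bit implementation rather than emulated:

```python
# Toy sketch of call thunking: widen 32-bit arguments and forward them
# to a native 64-bit service. All names here are hypothetical.

def native_write_file(handle: int, data: bytes, offset: int) -> int:
    # Stand-in for a native 64-bit OS service; returns bytes "written".
    return len(data)

def thunk_write_file(handle32: int, data: bytes, offset32: int) -> int:
    # Zero-extend the 32-bit handle and offset into the 64-bit space,
    # then call the native service directly - the call is marshaled,
    # not emulated.
    return native_write_file(handle32 & 0xFFFFFFFF, data, offset32 & 0xFFFFFFFF)

print(thunk_write_file(0x12345678, b"hello", 0))  # 5
```

Under this assumption, only the argument widths change at the boundary; the work itself runs natively on the host.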

Reply Score: 2

Need 64 bit for all the memory leaks
by FunkyELF on Thu 21st Jul 2011 15:17 UTC
FunkyELF
Member since:
2006-07-26

Memory leaks are memory leaks and they'll have them in 64bit too.

I run 32bit Windows in a virtualized environment. We run applications built on Eclipse RCP.
If I have Outlook and two of those Eclipse applications open along with Firefox, everything crawls.
That is with 3 tabs open.
With Chrome, it seems I can have as many tabs open as I want and it doesn't hurt performance.

Seriously... why do I need 500 MB of memory to have 3 tabs open? OSNews, Gmail, and Google+?

Reply Score: 2

JLF65 Member since:
2005-07-06

It's not memory leaks that you need 64-bit for; it's security. 64 bits gives a lot more room for address-space randomization, which is a big factor in security these days.
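A rough back-of-the-envelope view of why the extra address bits matter for randomization. The entropy figures below are illustrative ballpark values, not exact numbers for any particular OS:

```python
# ASLR arithmetic sketch: more entropy bits means exponentially more
# brute-force guesses, on average, to hit one randomized address.

def expected_guesses(entropy_bits: int) -> int:
    """Average brute-force attempts to hit one randomized address."""
    return 2 ** entropy_bits // 2

print(expected_guesses(8))   # ~8 bits of randomness (32-bit ballpark): 128
print(expected_guesses(28))  # ~28 bits (64-bit ballpark): 134217728
```

Even with these rough figures, the 64-bit case pushes brute-forcing from trivial into the hundreds of millions of attempts.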

Reply Score: 2

laffer1 Member since:
2007-11-09

That's the exciting part... with 64-bit Firefox it can leak over 2 GB of memory now! It's an innovation!

Reply Score: 1

bassbeast Member since:
2007-11-11

It is THIS, this right here, which finally, after all these years, got me off Firefox and onto Chromium-based browsers, in my case Comodo Dragon. With my customers I have to support a VERY wide range of machines, from older office-box P4s and netbooks to the latest quads, and frankly, since the 3.6.x branch I've found FF unfit for purpose on anything less than a 3 GHz P4 with HT.

I can launch FF and walk away, and 4 hours later the memory has JUMPED several hundred MB with nothing being used! And that doesn't count the huge CPU spikes. Lord help you if you open a video tab on anything lower than 2.8 GHz with FF, as it will slam the CPU to 100% and leave it there for up to a minute, killing responsiveness. Compare this to Dragon, where even on a 1.8 GHz Sempron I can watch YouTube and the CPU will spike to MAYBE 60% for a few seconds but NEVER loses responsiveness.

So I'm sorry, FF devs, but it's too little, too late. For ages you told us the memory issues and CPU spikes were in our heads; you killed any chance of businesses using you with that last support-killing stunt you pulled, ran many of the extension devs over to Chrome with the same trick, and now you are simply too far behind. I truly hope you fix the mess and survive, but as someone who used FF before it was even called FF, frankly the direction you've been headed is the wrong one. Remember how FF was supposed to be the "fast and light" browser? Now sadly the upstart Chromiums kick the snot out of you while you suck down RAM and CPU like a fat guy at the all-you-can-eat buffet.

Reply Score: 1

Old news
by diegocg on Thu 21st Jul 2011 15:42 UTC
diegocg
Member since:
2005-07-08

I have been using 64-bit Firefox for years... in Linux. And 64-bit distros for even longer. This is only news in the Windows world.

Reply Score: 11

RE: Old news
by BluenoseJake on Thu 21st Jul 2011 21:25 UTC in reply to "Old news"
BluenoseJake Member since:
2005-08-11

Funny. It's only in the last couple of years that we've needed a 64-bit OS on desktops, but if you think running 64-bit Linux for years is some big deal, then please continue mouthing off. I (and most people) switched to a 64-bit OS when I had 4 GB or more; no reason to before. I got my first 4 GB computer in 2006, and I installed both 64-bit Linux and 64-bit Windows. Running 64-bit Linux before that was just useless, really.

It's not some limitation of Windows that has kept Firefox 32-bit on Windows, either; it was Mozilla. IE has a 64-bit version, so it can be done. Mozilla just didn't (or couldn't) do it. I don't know why, and I don't care; 32-bit Firefox runs just fine.

Reply Score: 2

RE[2]: Old news
by Valhalla on Fri 22nd Jul 2011 06:58 UTC in reply to "RE: Old news"
Valhalla Member since:
2006-01-24

I (and most people) switched to a 64Bit OS when I had 4G or more, no reason to before.

While more addressable RAM is certainly a factor, you can't discount the additional registers, which certainly make quite a difference in intensive computation.

x86 is a very register-starved CPU, with only 6 registers (eax, ebx, ecx, edx, esi, edi) really being general purpose. 64-bit adds another 8 (r8-r15), which can help a lot in heavy computing, since registers are by far the fastest place in a CPU to store and retrieve data, and keeping a value in a register rather than pushing it onto the stack can have a great impact on loop performance. Being able to manipulate 64 bits rather than 32 bits per instruction is obviously also a benefit beyond the larger addressable space.

The downside is larger code size due to 64-bit address references, but I'd say the benefits far outweigh that unless you are very starved for RAM.
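The pointer-size cost mentioned here is easy to see concretely. A minimal sketch: check the pointer width of the running interpreter and do the footprint arithmetic for a pointer-heavy structure (the million-pointer figure is just an illustration):

```python
import struct

# Pointer width of the running interpreter: 4 bytes on a 32-bit build,
# 8 bytes on a 64-bit build ("P" is the native pointer format code).
pointer_bytes = struct.calcsize("P")
print(pointer_bytes * 8, "bit build")

# A structure holding a million pointers roughly doubles its pointer
# storage when addresses go from 4 to 8 bytes.
n = 1_000_000
print("32-bit:", n * 4, "bytes of pointers")
print("64-bit:", n * 8, "bytes of pointers")
```

This is the cache-pressure argument in miniature: the same data structure occupies twice the pointer bytes, so fewer entries fit per cache line.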

Reply Score: 2

RE[2]: Old news
by Lennie on Fri 22nd Jul 2011 11:13 UTC in reply to "RE: Old news"
Lennie Member since:
2007-09-22

I don't know; I had a DEC Alpha way before that time and it ran 64-bit Linux just fine. It didn't have as much legacy baggage to drag around, so they had done a lot of things right: RISC, many registers, and, I think, one flat memory region instead of some of the odd limitations in x86/amd64.

Maybe 64-bit wasn't useful for my machine, but some people already had 4 GB of memory at that time.

I know you could also install Windows on it, but as I understand it there were pretty much no applications available.

Reply Score: 2

RE[2]: Old news
by Surtur on Fri 22nd Jul 2011 11:28 UTC in reply to "RE: Old news"
Surtur Member since:
2009-04-15

[...] I (and most people) switched to a 64Bit OS when I had 4G or more, no reason to before. I got my first 4G computer in 2006, and I installed both Linux 64 and Windows 64. Running 64Bit Linux before that was just useless, really


While this may be true for many users like you, I personally consider (being an OpenBSD user) native NX-bit support without hacks quite a feature. This has been discussed at http://www.openbsd.org/papers/auug04/mgp00001.html whereas the following page gives a nice overview of the problems regarding i386: http://www.openbsd.org/papers/auug04/mgp00017.html

Quoting the Wikipedia article summarizing the NX bit (http://en.wikipedia.org/wiki/NX_bit), it seems this is the same with many Linux distributions:

Some desktop Linux distributions such as Fedora Core 6, Ubuntu and openSUSE do not enable the HIGHMEM64 option by default, which is required to gain access to the NX bit in 32-bit mode, in their default kernel; this is because the PAE mode that is required to use the NX bit causes pre-Pentium Pro (including Pentium MMX) and Celeron M and Pentium M processors without NX support to fail to boot.


YMMV of course, as many people do not care. Although OpenBSD does not even support more than 4 GB of memory (even on amd64) by default in a release at the moment (that's coming in November), given that there are no downsides, why should I settle for less...
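On Linux, the NX capability discussed above is advertised in the CPU flags. A minimal sketch of checking for it; the sample flags string below is illustrative, not read from a real machine:

```python
# Check a cpuinfo-style flags line for the NX bit. On Linux the 'nx'
# flag appears in /proc/cpuinfo when the CPU supports no-execute pages;
# 'lm' (long mode) marks 64-bit capability.

def has_nx(flags_line: str) -> bool:
    return "nx" in flags_line.split()

sample = "fpu vme de pse tsc msr pae mce cx8 sep nx lm"  # illustrative
print(has_nx(sample))  # True
```

On a real system one would feed this the `flags` line from /proc/cpuinfo; whether the kernel actually *uses* NX in 32-bit mode is the separate PAE question quoted above.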

Reply Score: 1

RE[3]: Old news
by BluenoseJake on Fri 22nd Jul 2011 12:29 UTC in reply to "RE[2]: Old news"
BluenoseJake Member since:
2005-08-11

32-bit Windows supports the NX bit; it's just off by default. It's required for DEP to work.

Edited 2011-07-22 12:30 UTC

Reply Score: 2

RE[4]: Old news
by BluenoseJake on Fri 22nd Jul 2011 12:52 UTC in reply to "RE[3]: Old news"
BluenoseJake Member since:
2005-08-11

Actually, I was wrong: it isn't required for DEP, but DEP will use it if available.

Reply Score: 2

RE[4]: Old news
by Surtur on Fri 22nd Jul 2011 13:18 UTC in reply to "RE[3]: Old news"
Surtur Member since:
2009-04-15

32Bit Windows supports the NX bit, it's just off by default. It's required for DEP to work.


So? I never questioned that. My point was completely different. (While we're at it, though it has nothing to do with this, OpenBSD even emulates it for processors not capable of it.) It was about:

[...] Running 64Bit Linux before that was just useless, really


(See my post above again.)

As I argued, I beg to differ in that regard. Windows' default security policies are a completely different topic I did not lose a word about...

Edited 2011-07-22 13:20 UTC

Reply Score: 1

RE[5]: Old news
by BluenoseJake on Fri 22nd Jul 2011 14:21 UTC in reply to "RE[4]: Old news"
BluenoseJake Member since:
2005-08-11

This is what you said:

"While this may be true for many users like you I personally consider (being an OpenBSD user) native NX-bit support without hacks quite a feature"

That's what I replied to. 32-bit Windows has that feature. My original post was about the utility of running any 64-bit OS with less than 4 GB of RAM. That was your response. You brought up OpenBSD, but at the time I was discussing Windows and Linux.

Did you bang your head or something?

Edited 2011-07-22 14:24 UTC

Reply Score: 2

RE[6]: Old news
by Surtur on Fri 22nd Jul 2011 17:18 UTC in reply to "RE[5]: Old news"
Surtur Member since:
2009-04-15

Did you bang your head or something?


No need for argumentum ad hominem.

While proper quoting would be a nice thing, to make my point: in my first paragraph I was giving you an idea of why 64-bit OS versions (aside from 4 GB of memory), like OpenBSD in my case, may make sense for some people:

While this may be true for many users like you I personally consider (being an OpenBSD user) native NX-bit support without hacks quite a feature


This refers to the situation that OpenBSD/i386 (referring to the slides) only uses the emulation of W^X. If you want true W^X, you have to use OpenBSD/amd64. Aside from that, this emulation seems to work for all IA-32 processors, hence:

(When we are at it, which has nothing to do with it, OpenBSD even emulates it for Processors not capable of.)


Leaving that aside and going to Linux, which you were actually referring to:

Some desktop Linux distributions such as Fedora Core 6, Ubuntu and openSUSE do not enable the HIGHMEM64 option by default, which is required to gain access to the NX bit in 32-bit mode, in their default kernel; this is because the PAE mode that is required to use the NX bit causes pre-Pentium Pro (including Pentium MMX) and Celeron M and Pentium M processors without NX support to fail to boot.


As pointed out explicitly in my last post, this refers to the fact that there may be cases where this
Running 64Bit Linux before that was just useless, really
is wrong. (Quite clearly, as it is a generalization.)

At no point did I question the capability of Microsoft Windows regarding DEP.

Hope I've made it clear for you this time...

Edited 2011-07-22 17:19 UTC

Reply Score: 1

RE: Old news
by andih on Thu 21st Jul 2011 22:36 UTC in reply to "Old news"
andih Member since:
2010-03-27

lol yeah agree ;)

I really don't get what people see in Windows anyway. Windows is for noobs, really! It's designed for people needing only very basic functionality. Alright, you might get some advanced things done by:
¤ Searching for useful apps on Piratebay or something (and maybe catching a rootkit in the attempt). But it's slow, risky, and the chances that you have to commit piracy to make stubborn windoze do what you like are big.
¤ Paying extra to MS or some 3rd party for their solutions.

In Linux, a world of possibilities is just an aptitude (portage or whatever) and a .conf away! Fast, powerful, easy, and secure... and even free! People that are still using Windows must have very basic and standard needs, or simply don't know any better.

After I switched to Linux I never download pirated SW. No need for that on Linux. Piracy is a Windows thing. Funny, then, that the companies that by all means want to stop SW piracy are the same ones that would probably also like to see Linux and OSS dead. (Although they might not want to admit it.)

Well, it's about time that Windows tries to catch up on 64-bit. I LOL when browsing VLSC and users are strongly encouraged to go for the 32-bit version of MS Office 2010 even on a new 64-bit Win7. Wow... it's 2011 and 64-bit isn't exactly new - well, for other OSes at least. :p

Windows is strangely enough still popular and used all over... but "popular" != "good". Dvorak and QWERTY are a good example of that. Or Bush and Ron Paul :p
Even though I loathe Windows, I guess it still has its uses... for somebody at least, or so it seems.

Reply Score: 1

RE[2]: Old news
by Bill Shooter of Bul on Thu 21st Jul 2011 22:59 UTC in reply to "RE: Old news"
Bill Shooter of Bul Member since:
2006-07-14

Please, no more references to R*n P**l. I'd like one site where I can pretend intelligent people don't agree with him.

Reply Score: 2

RE[2]: Old news
by Spiron on Fri 22nd Jul 2011 01:39 UTC in reply to "RE: Old news"
Spiron Member since:
2011-03-08

You're either an idiot or a troll, 'cause last time I checked a hell of a lot of the market leaders in software - think Photoshop and Maya - run on Windows and not natively on Linux. As for your comments about piracy, most users don't commit it, and don't need to. If you really need to, then it's a relatively fast process thanks to BitTorrent. And just so you know, most people can make Windows do what they want without EVER having to go to ThePirateBay; there are programs out there that aren't open source but are free and give you easy access to advanced usage of Windows.

And unless you've really been living under a rock for the last 5 years, you would know that Windows supports 64-bit, and it's rather good, actually. Microsoft cannot be held responsible for the Mozilla foundation not wanting to port Firefox to 64-bit Windows. IE has been 64-bit since Win7, so essentially 2008 if you include prebuilds and betas.

There are many reasons why Windows is still popular, the main one being that people are used to it. They have used it at home for the last 15 years and in businesses for around the same time, and people like what is familiar. There are other reasons, like the stigmas attached to Linux and other alternative OSes - for example, Linux still being viewed as the 'nerds' OS, with people either thinking they are not smart enough for it OR holding nerds in contempt. Another example is the stigma against free stuff and quality, but that is a separate discussion. And your comment about keyboard layouts is irrelevant. No one uses Dvorak because it never caught on. Neither is superior to the other; it's purely a matter of preference.

And before you start campaigning against me as an open-source basher who is also a Microsoft fanboi, I just want you to know that I use Arch Linux and Gentoo as my main OSes, so I see more terminal and config-file work than you on your Debian-based system do.

Reply Score: 1

RE[3]: Old news
by smitty on Fri 22nd Jul 2011 02:31 UTC in reply to "RE[2]: Old news"
smitty Member since:
2005-10-13

Microsoft cannot be held responsible for the mozilla foundation not wanting to port firefox to 64bit windows.

It has been "ported" for years; it just was never officially released or supported.

IE has been 64 bit since Win7, so essentially 2008 if you include prebuilds and beta's.

True, although no one uses it. And with IE9 the JavaScript compiler only works in the 32-bit version, so performance is about 10x better in the 32-bit version than in the 64-bit one, which still interprets all the JavaScript code.


Adobe's Flash is the primary reason that 64-bit browsers aren't common on Windows. It seems the success of 64-bit Windows 7 has encouraged a lot of plugin makers to create 64-bit ports that are suddenly getting finished this year or next, so it's a good time for Mozilla to start looking into it as well.

Reply Score: 2

RE[4]: Old news
by Spiron on Fri 22nd Jul 2011 04:52 UTC in reply to "RE[3]: Old news"
Spiron Member since:
2011-03-08

Quite right, Adobe Flash is the prime cause, but this shouldn't have stopped browser makers from shipping 64-bit browsers. If there had been demand for a working 64-bit Flash plugin, Adobe might have made one work before now. Either way it is a moot point; 64-bit Flash is only really becoming an option with the Flash 11 beta. All I was trying to say was that the lack of a 64-bit Firefox wasn't the fault of Windows or Microsoft directly.

Reply Score: 1

RE[3]: Old news
by lemur2 on Fri 22nd Jul 2011 03:03 UTC in reply to "RE[2]: Old news"
lemur2 Member since:
2007-02-17

You're either an idiot or a troll, cause last time i checked a hell of a lot of the market leaders in software, thinking Photoshop and Maya, run on windows and not natively on linux.


This is, of course, a limitation of the software you mention (Photoshop and Maya), it is not a limitation of Linux.

For example, people used to disparage Linux for having no support for professional CAD; it didn't run AutoCAD. Well, now there is a professional CAD application for Linux.

http://www.bricsys.com/en_INTL/bricscad/index.jsp
http://www.bricsys.com/en_INTL/bricscad/comparison.jsp

There is still no Linux version of AutoCAD, but understand that this is a failing of Autodesk (who make AutoCAD), and not Linux per se.

As for Photoshop, meh... if you want to manage digital photography on Linux, use digiKam, and if you want to create raster graphics, use Krita. 99% of people wouldn't miss out on a single thing, except of course for the significant cost of a Photoshop license.

http://www.digikam.org/drupal/about?q=about/features
http://krita.org/features

Enjoy.

unless you've really been living under a rock for the last 5 years you would know that Windows supports 64bit, and its rather good actually.


Not really. Microsoft doesn't own the source code for many, many Windows drivers; the hardware OEMs do. If the nice, expensive laser printer model you have owned for quite a few years happens to be out of production now, but it still works perfectly, and you upgrade to 64-bit Windows, there is a strong risk you may have to scrap your printer.

http://answers.microsoft.com/en-us/windows/forum/windows_7-hardware...

It happens with a variety of hardware, not just printers:

http://www.w7forums.com/no-64-bit-driver-workaround-t5505.html

IE has been 64 bit since Win7, so essentially 2008 if you include prebuilds and beta's.


Javascript performance is horribly broken in 64-bit IE.

http://ironvine.com/blog/index.php/archives/ie9-browser-wars/
"So, what’s the conclusion? Simple, IE9 64-bit is shockingly bad, and all the other browsers are, on the whole, pretty evenly matched."

Just keeping it real here.

Edited 2011-07-22 03:21 UTC

Reply Score: 4

RE[4]: Old news
by Spiron on Fri 22nd Jul 2011 06:18 UTC in reply to "RE[3]: Old news"
Spiron Member since:
2011-03-08

I will reply to just one of your posts but will answer both.

The fact that these pieces of software aren't on Linux isn't really the issue. There are pieces of FOSS software that fill similar roles, but professionals are most likely to use these industry standards on their computers. For example, I have a number of professional photographer friends, and one of them uses FOSS tools and OSes. But when he wants to do top-of-the-line editing, he switches back into Windows with Photoshop, because he thinks it's a better product for complex editing. Another example: just about everyone making a full-feature animated film for cinematic release, either Hollywood or independent, uses Maya. It is the recognized standard for its field, and nothing proponents of Blender say is going to change that anytime soon.

When I said that the 64-bit experience on Windows was good, I was speaking from the point of view of someone who does have fairly new hardware, but also from the point of view of a developer. With any hardware made in the last 3 years it works pretty much as expected, and all the frameworks and system libraries work at least as well as anything on the 32-bit versions. And while I cannot say that 64-bit IE9 is a good browser, I can say that the lack of a decent one from Microsoft should have left the field open to other browser makers, but Adobe Flash held them back with its refusal to make a 64-bit plugin. All I was trying to point out was that the ability had been there; it was just that no one had come along with a good version.

As to the stigma that free != good, a surprising number of people dislike open-source software for this reason. It's not a problem with them, per se; it's a problem with how and what we are taught. For example, a shopper sees two speakers, slightly different in size and shape. One is offered at $15, the other at $10. The majority of people will pay the $15 instead of getting the $10 one. Why? It's not because they are different. It's not even really about the varying quality of either speaker. It's just that our society is a consumer-driven one, and consumer logic dictates that if it's more expensive then it must be better quality. Because of its general truth in other areas of consumption, people also apply this rule of thumb to software without really thinking, and because of the open-source movement this can be a rather large pitfall.

So the problem here isn't the popular view itself but more its unthinking application to every area of consumerism, including those where it possibly shouldn't belong. Early education is the key here, because after people get to a certain age (and it differs for every person), their ideas get locked in place. But schools are generally resistant to the idea of Linux/FOSS-powered anything, and that includes the non-government ones. This can also be partly attributed to the stigma.

Another thing to keep in mind about people and software is that most people are not technically minded. A lot of people just use software that is preinstalled on their computer or comes on a disk. If they install other software, it is generally other people doing it for them, or the process is so simple that it's trivial, a la Google Chrome. In such cases it is generally the adverts provided by Google/Facebook/others that are the reason they're installing the software. In fact, it has been estimated that only ~35% of Google Chrome users care about the speed and other features; the other ~65% use it because they saw the ad on Google about a "Faster Internet".

Edited 2011-07-22 06:25 UTC

Reply Score: 1

RE[5]: Old news
by lemur2 on Fri 22nd Jul 2011 07:09 UTC in reply to "RE[4]: Old news"
lemur2 Member since:
2007-02-17

I will reply to just one of your posts but will answer both. The fact that these pieces of software aren't on linux isn't really the issue. There are pieces of FOSS software that fulfill similar roles. But professionals are most likely to use these industry standards on their computers. For example, i have a number of professional photographer friends, and one of them uses FOSS tools and OS's. But when he wants to do top-of-the-line editing he switches back into Windows with Photoshop, because he thinks a better product for doing complex editing in. Another example, everyone making an full-feature animated film that is for cinematic release, either hollywood or independent, use Maya. It is the recognized standard for its field and nothing proponents of Blender say is going to change that anytime soon.


No problem with any of that. As I said, 99% of people would be able to use digiKam/Krita instead of Photoshop and miss out on absolutely nothing except the significant cost of a Photoshop license. In the case of pirated copies of Photoshop, all that people using digiKam/Krita would miss out on is the possibility of being fined. For 1% of people, using Photoshop could indeed be the better option. No argument, really.

When I said that the 64Bit experience was good on Windows i was speaking from the point of view of someone that does have fairly new hardware, but also from the point of a developer. With any hardware made in the last 3 years it works pretty much as expected, and all the frameworks and system libraries work at least as good as anything on 32Bit versions.


Once again, no argument. For a smallish percentage of machines in use, it is possible that 64Bit Windows is a better experience than 32Bit Windows. But once again, this would represent a minority.

And while I cannot say that IE9 64Bit is a good browser i can say that the lack of a decent one from Microsoft should have left it open to other browser makers to make one, but Adobe Flash held them back with their refusal to make a 64Bit plugin. All i was trying to point out was that the ability had been there, it was just that no-one had come along with a good version.


I don't know what this means. I run a perfectly good version of 64-bit Firefox on my Linux desktop. This software was compiled not by Mozilla, but by the maintainers of the Linux distribution I run.

As to the stigma that free != good, a suprising amount of people dislike open-source software because of this reason. It's not a problem with them, perse, it's a problem with how and what we are taught. For example, a shopper sees two speakers, slightly different in size and shape. One is being offered at $15 while the other is being offered for $10. The majority of people will pay the $15 instead of getting the $10 one. Why? Its not because they are different. It's not even really about the varying quality of either speaker. It's just that our society is a consumer-driven one and consumer-logic dictates that if its more expensive then it must be better quality.


I can't speak for what antiscience they might "teach" you in American schools, but here in Australia a lot of people can in fact read product info and add up.

Because of its general truth among other areas of consumation people also apply this rule of thumb to software without really thinking. And because of the open-source movement this can be a rather large pitfall. So the problem here isn't the populace view itself but more their unthinking application of it to every area of consumerism including those where it possibly shouldn't belong. Early education is the key here, because after people get to a certain age, and it differs for every person, their ideas get locked in place.


Actually, I have observed that ideas about the value of goods are most certainly not locked in place at all. Amongst teenagers, as they turn into adults, there is a distinctly noticeable transition point: the moment the individuals in question have to pay for things themselves. That event changes their ideas about value for money pronto.

But schools are generally resistant to the idea of linux/FOSS powered anything, and that is including the non-government ones.


http://linux.slashdot.org/story/08/04/25/1159232/KDE-Desktops-For-5...

This can also be partly attributed to the stigma.


My oh my, you have drunk the Kool-Aid, haven't you?

Another thing to keep in mind about people and software is that most people are not technically minded. A lot of people just use software that is preinstalled on their computer or comes on a disk. If they install other software it is generally other people doing it or the process is soo simple that it's trivial, ala Google Chrome. In such cases it is generally the adverts provided by google/facebook/other that are the reason they're installing the software. Infact, it has been estimated that only ~35% google chrome users care about the speed and other things. The other ~65% use it because they saw the add on google about using a "Faster Internet".


Advertising is indeed a powerful influence that, sadly, can result in people getting ripped off. For this reason it behooves us to point out whenever we can that there are great alternative applications, perfectly legal but not advertised because they are free, which can save people heaps of money with no risk!

Actually, to fail to do so is a dis-service to humanity.

Edited 2011-07-22 07:17 UTC

Reply Score: 2

RE[5]: Old news
by zima on Fri 22nd Jul 2011 07:57 UTC in reply to "RE[4]: Old news"
zima Member since:
2005-07-06

Another example, everyone making an full-feature animated film that is for cinematic release, either hollywood or independent, use Maya. It is the recognized standard for its field and nothing proponents of Blender say is going to change that anytime soon.

Second sentence is fine (though, as far as industry practices go, that's also Maya on Linux), but as for "everyone" of the first...

http://www.blendernation.com/2010/02/04/%E2%80%98the-se...
http://en.wikipedia.org/wiki/The_Secret_of_Kells#Accolades
http://en.wikipedia.org/wiki/Plum%C3%ADferos

Reply Score: 1

RE[5]: Old news
by Lennie on Fri 22nd Jul 2011 15:23 UTC in reply to "RE[4]: Old news"
Lennie Member since:
2007-09-22

What kind of annoys me is why doesn't Adobe release a version of Photoshop for the Linux-desktop ?

I heard that in Hollywood they already use Photoshop on Linux to do frame manipulation.

It might be running on Wine, I don't know.

Usually the version one older than the latest does work on Wine, so I wouldn't be all that surprised.

Reply Score: 2

RE[6]: Old news
by zima on Wed 27th Jul 2011 02:50 UTC in reply to "RE[5]: Old news"
zima Member since:
2005-07-06

I haven't heard of Photoshop being used that way.

Somewhat ironically, you might be thinking of CinePaint. Which is a variant of... GIMP.

Edited 2011-07-27 02:55 UTC

Reply Score: 1

RE[3]: Old news
by lemur2 on Fri 22nd Jul 2011 04:20 UTC in reply to "RE[2]: Old news"
lemur2 Member since:
2007-02-17

Another example is that there are stigmas against free stuff and quality, but that is a separate discussion.


IE is free.

Firefox 8 is leaner and faster.

http://www.techdrivein.com/2011/07/firefox-8-is-20-faster-than-fire...
"According to a recent study by extremetech.com, Firefox 8 is already 20% faster than Firefox 5 in almost every metric and has got a drastically reduced memory footprint as well."

http://www.extremetech.com/internet/89570-firefox-8-is-20-faster-th...

Considering that Firefox 4/5 was/is about level pegging on performance with Chrome, Opera and IE9, this is about to get interesting. Chrome 14 is also reportedly a speed improvement. The interesting bit for me, though: I can't wait to see what desperate Firefox bashers will try to pick on next.

If judgement-challenged people really do think that "free is rubbish", then they seriously need a rethink, especially when it comes to software.

Edited 2011-07-22 04:24 UTC

Reply Score: 2

RE[3]: Old news
by zima on Fri 22nd Jul 2011 07:22 UTC in reply to "RE[2]: Old news"
zima Member since:
2005-07-06

Last time I checked, a hell of a lot of the market leaders in software (Photoshop and Maya come to mind) run on Windows and not natively on Linux.

Last time? In which millennium was that?

http://usa.autodesk.com/maya/system-requirements/ "For 64-bit Autodesk Maya 2012"
http://www.linuxjournal.com/article/9653
http://www.linuxjournal.com/article/4803

As to your comments about piracy, most users don't commit it

That's an awfully broad statement to make, have anything to back it up?

For my part I can say that in a place which, while not great, is still among the finest to live in (ex-Comecon, late EU member state), it's damn hard to find a personal computer without something pirated on it. Never mind that only a few short years ago this typically included the OS; the uptake of inexpensive laptops mostly fixed that, though still not as rapidly as expected, since even machines from big manufacturers came with such smokescreens as "DOS2000" or a Linux live CD/DVD (typically a not-really-functional one, ignoring the machine's hardware and drivers, or not even booting to X).

And most places are definitely less prosperous than mine. Most PC users probably live in them; not in yours.

Reply Score: 1

RE[4]: Old news
by Spiron on Fri 22nd Jul 2011 09:22 UTC in reply to "RE[3]: Old news"
Spiron Member since:
2011-03-08

At the time I wrote my last comment I was not aware that Maya was on Linux. You have opened my eyes to that fact and I thank you.

When I talked about pirated material I was replying to another person who was talking specifically about software piracy. Movie/music piracy is a much more common practice, but was not what I was talking about. As to your comments about most computer users: if you take all the personal computers in China into account then yes, software piracy is rampant, thanks to Microsoft's softer policy on piracy compared to certain other companies. But excluding China, most of the computers in the world are bought with a legal copy of a Windows OS on them.

Reply Score: 1

RE[5]: Old news
by lemur2 on Fri 22nd Jul 2011 10:15 UTC in reply to "RE[4]: Old news"
lemur2 Member since:
2007-02-17

At the time I wrote my last comment I was not aware that Maya was on Linux. You have opened my eyes to that fact and I thank you.

When I talked about pirated material I was replying to another person who was talking specifically about software piracy. Movie/music piracy is a much more common practice, but was not what I was talking about. As to your comments about most computer users: if you take all the personal computers in China into account then yes, software piracy is rampant, thanks to Microsoft's softer policy on piracy compared to certain other companies. But excluding China, most of the computers in the world are bought with a legal copy of a Windows OS on them.


I was talking about pirated copies of Photoshop. I have read that Photoshop is the most-pirated software.

As for your observation that "most of the computers in the world are bought with a legal copy of a Windows OS on them" ... while true for desktop computers (and only desktop computers), this too is a great disservice to people.

Edited 2011-07-22 10:18 UTC

Reply Score: 2

RE[3]: Old news
by andih on Fri 22nd Jul 2011 11:30 UTC in reply to "RE[2]: Old news"
andih Member since:
2010-03-27

You say you have used Gentoo as your main system? What are you using now? Windows? omgwtf, what happened?

And unless you've really been living under a rock for the last 5 years you would know that Windows supports 64bit, and its rather good actually.

Sure, but it took its time. MS still suggests the 32-bit version for a lot of its own programs. ;)

There are many reason why Windows is still popular, the main one being that people are used to it.

Congrats, that was exactly my point.
It's popular because people are used to it, not because it's good. Just as QWERTY vs Dvorak! If you really believe that Dvorak is not a huge improvement for use on modern keyboards, you have clearly not tried it or don't know anything about it. If you use an early-1900s typewriter, your point is valid though :p You know why QWERTY became the standard, I guess? http://upload.wikimedia.org/wikipedia/commons/f/f8/Typebars.jpg

As you say, people like what's familiar, so QWERTY will still be the most used layout for many years to come. Windows too, sucking (or not).
The Dvorak/QWERTY comparison was not irrelevant, as you probably see. I was trying to underline that popular is not the same as good... and I see we agree ;)

To anybody wanting to try Dvorak:
setxkbmap dvorak
or, for a Dvorak variant of a layout other than the default "us":
setxkbmap -layout LANGUAGE -variant dvorak


There are other reasons, like the stigma's attached to linux and other alternate OS's, for example Linux still being viewed as the 'nerds' os and thus people either thinking they are not smart enough for it OR them holding nerds in contempt. Another example is that there are stigma's against free-stuff and quality, but that is a seperate disscussion.

Well said. I couldn't agree more. It's sad though; I really hope this changes.

Reply Score: 1

RE[3]: Old news
by bassbeast on Sat 23rd Jul 2011 10:49 UTC in reply to "RE[2]: Old news"
bassbeast Member since:
2007-11-11

I'll get hate for daring to say this but WTH. You wanna know why Linux has a bad rep? As a retailer I'll be happy to tell you and that is because the DRIVERS SUCK and as long as Torvalds is in charge it'll stay that way!

OSX, BSD, Windows, Solaris, even OS/2: what do these have in common? A stable hardware ABI, and guess what? The drivers WORK. As someone on this very site once told me, "When drivers fail on Linux, geeks just get a knowing smile and say 'yeah, it does that, you need to...' followed by a big pile of CLI gunk", and THAT is the problem! Do you think I can afford to give away support for life, or that my customers would be happy when the 6-month upgrade death march takes out their sound? Hell, there isn't even a "roll back driver" button, which Windows has had for a fricking decade! And NO, actual home users will NOT go trawl your forums, nor will they mess with huge CLI messes!

Honestly, I like Linux, I really do. It has several nice GUIs and plenty of free software, but frankly my customers can get that same free software on Windows WITHOUT all the broken drivers! The sad part? It isn't a technical problem, it is ego and dogma. Linus said in '93 that he didn't like ABIs because they wouldn't let him do whatever he wanted with the kernel. That was fine in '93, when the only ones using Linux besides him were a few geeks on IRC, but it ain't 1993 anymore!

You have an OS that updates at a frankly INSANE pace, yet something as fundamental as drivers is horribly broken! Yet many in the community will support Linus being an @ss because "ZOMG somebody might release a non-free driver ZOMG!", while ignoring that companies like Nvidia ALREADY release non-free drivers, and that doesn't stop devs from releasing free ones. As it is now, a company either throws itself on the mercy of the kernel devs (unacceptable) or has to "pull an Nvidia" and keep an entire team of devs who do NOTHING but constantly fix the problems Linus' kernel tweaking causes. Or they can just choose the third option, which is to ignore your OS. Hmmm... which do you think most will choose?

I don't like paying for Windows licenses and I would love to give my customers the choice of Linux or Windows. But until I can get a minimum of 7 years of support WITHOUT the upgrade death march (that's around half of Windows' support cycle, BTW), or a stable ABI so that upgrades can happen without borking drivers? Then sadly I can't carry your product. I have a rep to maintain, and selling machines that break every 6 months doesn't help it any. No sale.

Reply Score: 1

RE[4]: Old news
by smitty on Sat 23rd Jul 2011 11:18 UTC in reply to "RE[3]: Old news"
smitty Member since:
2005-10-13

OSX, BSD, Windows, Solaris, even OS/2: what do these have in common? A stable hardware ABI, and guess what? The drivers WORK.

LOL - great rant there. Here's where you're wrong:

OSX - they pick out the frickin' hardware, for crying out loud. And, by the way, Apple changes its APIs ALL THE TIME.

BSD - you're really arguing the driver situation on BSD is better than on Linux? Really?

Windows - definitely has the advantage in terms of the average home user. But claiming the drivers always work is dead wrong. I've seen driver issues at my work that took MONTHS to resolve, and yes, we were using drivers from the hardware manufacturer. This was really EXPENSIVE hardware too, and we finally just decided to buy from another company, we were having so much trouble. And people in general had all sorts of problems moving from XP to Vista, because manufacturers no longer supported the device.

And if you think you can get manufacturers to put as much effort into an OS with 2% market share as they do for one with 90%+ market share, you're delusional.

Solaris, OS/2 - LOL WUT?

Edited 2011-07-23 11:22 UTC

Reply Score: 2

RE[4]: Old news
by lemur2 on Sat 23rd Jul 2011 11:46 UTC in reply to "RE[3]: Old news"
lemur2 Member since:
2007-02-17

You have an OS that updates at frankly an INSANE pace yet something as fundamental as drivers is horribly broken! Yet many in the community will support Linus being an @ss because "ZOMG somebody might release a non free driver ZOMG!" while ignoring that companies like Nvidia ALREADY release non free drivers


Hardly. This is only a problem for nVidia, who insist on not releasing programming specifications for their hardware.

I have ATI or Intel graphics on all my systems, and there is no problem whatsoever; they all work like a charm. Intel writes open source drivers for Linux, and ATI released programming specifications

http://www.x.org/docs/AMD/

so that open source developers (at Xorg) can write open source drivers for ATI graphics.

http://www.x.org/wiki/RadeonFeature

There is an open source Linux driver for nVidia graphics, but because there are no programming specifications, the development team must resort to reverse engineering, and so this driver is way, way behind as a consequence.

This in effect means that systems with nVidia graphics are not made for Linux. nVidia-based systems are simply not well suited to run Linux.

If you want to run Linux well, do it on a system with Intel or ATI graphics.

Edited 2011-07-23 11:50 UTC

Reply Score: 2

RE[5]: Old news
by zima on Mon 25th Jul 2011 10:03 UTC in reply to "RE[4]: Old news"
zima Member since:
2005-07-06

This in effect means that systems with nVidia graphics are not made for Linux. nVidia-based systems are simply not well suited to run Linux.

If you want to run Linux well, do it on a system with Intel or ATI graphics.

It's somewhat more complex than that. After all, one of the more notable success stories of desktop Linux is its adoption in CGI / ~Pixar-style work (which is probably the main reason Nvidia has provided decent drivers for quite some time).

Reply Score: 1

RE[4]: Old news
by zima on Mon 25th Jul 2011 08:59 UTC in reply to "RE[3]: Old news"
zima Member since:
2005-07-06

a minimum of 7 years of support WITHOUT the upgrade death march (this is around half of Windows support cycles BTW)

Not only is the longevity of XP an aberration that MS tried to avoid; its mainstream support ended over 2 years ago. And extended support till 2014 applies only to XP SP3, 4 years after its release. Which is fairly comparable to, say, desktop Ubuntu LTS releases (or other big "slow" ones; any problems are self-inflicted when you purposely choose versions on the cutting-edge cycle).

Reply Score: 1

RE[2]: Old news
by BluenoseJake on Fri 22nd Jul 2011 12:42 UTC in reply to "RE: Old news"
BluenoseJake Member since:
2005-08-11

I've never had to resort to piracy to make Windows do what I like, between OSS software and MS's own software available for free (Express editions of VS and SQL Server, for example), but hey, if you need a rationalization for your piracy, it's cool; perhaps you just didn't know better.

Reply Score: 2

RE[3]: Old news
by andih on Mon 25th Jul 2011 10:23 UTC in reply to "RE[2]: Old news"
andih Member since:
2010-03-27

Was using windoze at that time, that explains it all.. lol

Reply Score: 1

RE[2]: Old news
by BallmerKnowsBest on Mon 25th Jul 2011 11:52 UTC in reply to "RE: Old news"
BallmerKnowsBest Member since:
2008-06-02

lol yeah agree ;)

I really don’t get what people see in windows anyway.. Windows is for noobs, really!


As opposed to Linux, which is apparently for semi-literate teenagers who believe that "lol" is punctuation.

Reply Score: 2

never worked
by Shannara on Thu 21st Jul 2011 16:01 UTC
Shannara
Member since:
2005-07-06

The 64-bit version of Firefox never ran on Vista, and not on Windows 7 either... perhaps Mozilla should actually test their product before release... stop pulling a Google.

Reply Score: 1

RE: never worked
by malxau on Thu 21st Jul 2011 16:16 UTC in reply to "never worked"
malxau Member since:
2005-12-04

The 64-bit version of Firefox never ran on Vista, and not on Windows 7 either... perhaps Mozilla should actually test their product before release... stop pulling a Google.


AFAIK Mozilla has never released a 64-bit version of Firefox for Windows. Do you have a link for that?

Reply Score: 2

RE[2]: never worked
by jgagnon on Thu 21st Jul 2011 18:05 UTC in reply to "RE: never worked"
jgagnon Member since:
2008-06-24

On a related note, is there really a need for a 64-bit web browser at present, or in the near future? Two GB (the amount normally available to 32-bit user apps in Windows) is a LOT of web page data and apps. I suppose as more and more applications make their way into browsers there will be more need for increased address space.

I'm not saying I have any objections to a 64-bit browser, because I don't, I'm just well aware that 32-bit Windows apps run just fine in 64-bit Windows.

/shrug

Reply Score: 1

RE[3]: never worked
by Shannara on Thu 21st Jul 2011 19:28 UTC in reply to "RE[2]: never worked"
Shannara Member since:
2005-07-06

A huge need so far. With all the bloated web pages and crappy Flash advertisements, there is definitely a need.

Reply Score: 2

RE: never worked
by Drumhellar on Thu 21st Jul 2011 18:26 UTC in reply to "never worked"
Drumhellar Member since:
2005-07-12

I beg to differ.
I've run 64-bit builds of Firefox on XP, Vista, and 7, though not extensively. Having no plugins sucks.

Reply Score: 2

RE[2]: never worked
by _xmv on Thu 21st Jul 2011 19:07 UTC in reply to "RE: never worked"
_xmv Member since:
2008-12-09

Plugins run in a container now and can be 32-bit.

Reply Score: 2

RE: never worked
by _xmv on Thu 21st Jul 2011 19:06 UTC in reply to "never worked"
_xmv Member since:
2008-12-09

I've been using the 64-bit version of the nightly and it works on my W7 x64.
Maybe you are running Vista or W7 32-bit?

Reply Score: 2

RE[2]: never worked
by Shannara on Thu 21st Jul 2011 19:28 UTC in reply to "RE: never worked"
Shannara Member since:
2005-07-06

Definitely not 32-bit. I have a 64-bit system solely for programming ;) Granted, the last nightly build I tried was last year... they may have a working version now?

Reply Score: 2

RE: never worked
by mfarmilo on Fri 22nd Jul 2011 00:06 UTC in reply to "never worked"
mfarmilo Member since:
2009-02-28

The 64-bit version of Firefox never ran on Vista, and not on Windows 7 either... perhaps Mozilla should actually test their product before release... stop pulling a Google.


What? Mozilla has never released a 64-bit Windows version yet. That's the whole point of this article: Firefox 8 will be the first version to have a 64-bit build. Some people have done private 64-bit builds before now. If one of those didn't work for you, please don't lecture Mozilla on testing properly before releasing, when it wasn't even an official release you were using.

Reply Score: 2

Current limitations
by smitty on Fri 22nd Jul 2011 02:26 UTC
smitty
Member since:
2005-10-13

Firefox is compiled large-address-aware, which means it can address up to 4 GB on a 64-bit Windows OS, or 2-3 GB on 32-bit depending on whether the /3GB boot flag is set.

This is probably enough for the vast majority of people, but those who tend to open tons of tabs that are very image heavy can run into problems. A single 1080p image can take a decent chunk of memory when decoded, and hundreds of tabs can hold a lot of content. The additional registers available should help with javascript performance, however, which is a nice benefit for everyone.

Reply Score: 3

The 64-bit experience is a LIE
by AndrewZ on Fri 22nd Jul 2011 14:08 UTC
AndrewZ
Member since:
2005-11-15

I'm going to take a stand here and point out some harsh realities. First of all, the idea that you can even tell whether a system is running 32-bit or 64-bit is a lie, plain and simple. For all intents and purposes you can't tell the difference, whether on Linux or on Windows.

Most 32-bit applications on Windows have a 2 GB address space. This is enough for most applications except CAD, Photoshop, and databases. It is certainly enough for 99% of non-professional users, and for 99.9% of web browsing situations.

Porting a 32-bit app to 64-bit gains you at most a 5-15% speedup, mostly from optimizing for the extra registers and not from more RAM or wider memory addressing. 5-15% is generally not enough to make a big difference in the user experience; it's about as much of an increase as hyperthreading gives you, i.e. not a whole lot.

And as for this x86-32 vs x86-64 argument, the same thing applies. It's all in your mind. A user can't tell the difference. You can't tell the difference. If you ran the same Linux distro on different architectures you would not be able to tell them apart, all other things (RAM, disk, comparable CPU) being equal.

And as for this business of x86 being difficult to code for? 90% of the time that's also BS. An application written in a high-level language doesn't need architecture-specific optimization. Can you come up with instances where you needed to do some fixup in C or C++? Sure. That's the exception to the rule.

Let's face the harsh reality that CPUs are here not because they give us a better, different, or even distinguishable experience; they exist for economic reasons. ARM owns the laptop because it uses less power, and because Apple could make money off it. Atom was introduced not because it is better or "different" but because Intel can make money off it. SPARC is fading away not because it couldn't run 64-bit apps; it did, something like 12 years ago. Alpha was awesome but it went away, not because it wasn't killer (it was), but because it was more expensive than x86.

I could go on here but the fact remains. For 99% of desktop uses, 32-bit vs 64-bit is irrelevant.

Ultimately, Firefox is going 64-bit because it's time for it to happen. Everything needs to go 64-bit because that's where things are heading. Does it make a big difference to the basic user? No. High-end professional workstation users? Yes. Servers? Yes. Firefox users? No.

Reply Score: 2

RE: The 64-bit experience is a LIE
by Alfman on Fri 22nd Jul 2011 21:06 UTC in reply to "The 64-bit experience is a LIE"
Alfman Member since:
2011-01-28

AndrewZ,

"I could go on here but the fact remains. For 99% of desktop uses, 32-bit vs 64-bit is irrelevant."

I agree with your overall post; I've been saying that for eons. It's more about "cool factor" than anything else. Many people don't realize that 64-bit (or 128-bit, gasp) won't change their experience, certainly not until apps use more RAM. AMD took advantage of the incompatible 32->64-bit upgrade to make other architectural improvements, but these have nothing to do with 32/64-bit in principle.


I do want to counter your following claim though:

"And as for this business of X86-* being difficult to code to? 90% of the time that's also BS. Any applications written in a high level language doesn't need architecture specific optimization."

I have benchmarked algorithms whose relative performance flips on register-starved processors like the x86: variants that win there lose on processors with more registers, and vice versa.

The technical reason is that, on the x86, the cost of accessing local variables which don't fit in registers is the same as the cost of dereferencing arbitrary pointers (both are served from cache). This has profound implications for the choice of optimal high-level algorithms.

Another difference: with limited registers, it can make more sense to recompute a value from registers each iteration than to pull a pre-computed value off the stack each iteration.

I realize this is well below the level at which most developers operate. Code just needs to be good enough; anything more is overkill.

"Can you come up with instances where you needed to do some fixup in C or C++? Sure. That's the exception to the rule."

I think the potential for optimization is almost always there, but the NEED for it is the exception to the rule.

Reply Score: 2