Slow news day today, so here is a new poll. Vote for the architecture that Apple should switch to, if such a switch is necessary. Please note that we did not include options like MIPS, SPARC or Crusoe, as each of them makes little sense as a suitable candidate at this point, for different reasons. But the most popular choices are listed and awaiting your vote!

Please note that the early releases of AMD Opteron and especially Itanium2-based workstations will be very expensive, so they do not fit well with the “Macs for everyone” idea. As for the Motorola G5, this seems to be… a mythical CPU rather than a well-guarded secret, therefore we decided not to include it in this poll. In fact, the G5 can’t be considered a “switch” for Apple anyway, as it would have been the natural evolution for the Mac.
Please vote with a mindset of what would make sense and what would be best for the company and the users, not with a mindset stuck in utopia or… zealotry.
Please also note that to view and vote for the poll, you’ll need javascript (for reasons we have explained in the past).
The poll is now closed. Thanks everyone for voting!
Everyone is acting like the CPU choice is all that important for speed…
The G4 has pretty good IPC, and all that’s missing from the Mac design is a decent chipset with no bottlenecks, like the ones IBM, Sun and HP build when designing large servers.
Minor improvements to the G4/G5 design are all that’s needed.
Switched chipset architectures, multiple RAM banks, wide datapaths, and large caches are what you want to have if you want to support >2 CPUs efficiently, which is what Apple should be trying to do.
Nobody would argue with a properly implemented quad-G4 design, believe me…
Efficient and powerful software is also important and Apple needs to work a bit there, too.
On the other hand, decent chipsets for x86 CPUs already exist, maybe Nvidia should be enlisted… not that it’s easy modifying an x86 chipset for PPC use…
In any case, I don’t foresee any move to x86 soon. I love the POWER4, though that will not happen anytime soon, either (I mean with the low-cost one, forget the “proper” one).
I just expect better versions of the G4 and maybe the G5 soon, that’s all.
D
jbolden1517:
Only on very rare occasions do you see a desktop PC with 512MB of RAM, and even rarer is one with over 512MB; truly, at this point, people don’t really even need 512.
The P4 has more cache than Athlons, so I guess an Athlon doesn’t have much cache either, hmm?
The fact is, Intel MADE the processor scalable in terms of operations/clock, and they did that on purpose; the P4 is able to scale much, much more than the Athlon can, and thus can be a much faster processor.
The only reason you call them flaws is because of your obvious bias against Intel. Trolls such as yourself need to either leave, or learn to embrace technology for what it is, not hate it because company X made it, or love it because company Y produced it.
Who said anything about Athlon relative to Intel? I’ve been comparing Pentium III to the Pentium IV. And how could I hate products because Intel made them when I’m singing the praises of the Itanium2. You can feel free to get the last word in.
Does anyone else find it interesting that Apple is allowing a distributor to install Linux on Apple machines? Mr. Jobs certainly did what he could to keep BeOS from coexisting on the Mac platform.
I believe that Apple’s reaction to this will reveal its future processor roadmap. If Apple lets this go on, then I think we can infer a greater role for Linux on the PowerPC, and perhaps a greater role for IBM in the world of Apple.
http://news.com.com/2100-1040-957138.html?tag=fd_top
ALL I want is an Apple with a Power4 descendant. GO Big Blue, bye bye Moto!!!!!!! Not too much to ask. Keep those x86s for my games 🙂
“You are a potential immigrant aren’t you? In essence, you are a potential buyer of the product “Dutch society”. That still doesn’t make you eligible to vote on our policies.”
Once again, that is a bad analogy. The Netherlands government is not actively recruiting me and I’m not interested in being their citizen. Plus you’re comparing computer CPUs to real laws? That’s ludicrous. Would you compare the destruction of old computers to euthanasia for the old? It’s not the same.
“One reason might be that you don’t want to migrate, but want to have us do something you like. The same is probably true here.”
No it’s not. I’m a proud American. There’s nothing you can do to make me immigrate. To immigrate is to lose myself and my identity. Immigration is a life-altering decision. Choosing computer CPUs is just economics and convenience. The truth is you can do everything you want and ever need on either the PC or the Mac. They’re equally capable. The only difference is the experience (i.e. how the OS feels, how long it takes to get something done, how intuitive the interfaces are, etc.).
“How many people voted for x86 because they want to use MacOS X on a cheap PC?”
Probably all PC users. We all know Mac hardware is more expensive and inferior. I want the software, that’s all.
“Of course, the poll itself is already wrong since Eugenia doesn’t understand the difference between a Power4 and a PowerPC based on Power4 technology.”
Power4 and PowerPC can be lumped together because they are similar. It’s not wrong. It’s a generalization. Just like lumping XScale and StrongArm together.
“I’ve been hearing that sort of stuff for 20 years. 32 bits = 4 gigs of addressable memory at most using a simple pointer scheme (and usually more like 512 megs). Systems today use over 512 megs, and very soon will have over 4 gigs. It’s going to take years to make the transition to 64-bit memory pointers, and the sooner we start the better. It’s not a good idea to wait until there is a crippling need.”
The 32-bit refers to the data bus width, not the address bus width. For example, the 80286 was a 16-bit CPU but it could address 16MB of RAM. How is that possible? According to your calculations, there’s a 64KB limit.
The 80286 had a 24-bit address bus. 2^24 = 16MB. The same goes for modern 32-bit Intel CPUs: they have a 36-bit address bus. 2^36 is more memory than you can shake a stick at. The 512MB limit you’re talking about is not a limitation of the CPU; it’s a limitation of the chipset. Some desktop chipsets that I know of can address 768MB now. I bet there are those out there that can address more that I don’t know of.
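The arithmetic in the paragraph above is easy to verify; a quick sketch (Python used purely for illustration):

```python
# Addressable memory for a given address-bus width is 2**bits bytes.
def addressable_bytes(bus_width_bits):
    return 2 ** bus_width_bits

MIB = 2 ** 20
GIB = 2 ** 30

print(addressable_bytes(16) // 1024)  # 64 KB: a single 16-bit pointer
print(addressable_bytes(24) // MIB)   # 16 MB: the 80286's 24-bit bus
print(addressable_bytes(32) // GIB)   # 4 GB: a flat 32-bit pointer
print(addressable_bytes(36) // GIB)   # 64 GB: 36-bit extended addressing
```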
One thing you didn’t think of is that 64-bit CPUs can mean less efficiency. There’s going to be more loss of usable memory because of padding and alignment. And a greater penalty when accessing single bytes, etc.
jbolden: “I’ve been hearing that sort of stuff for 20 years. 32 bits = 4 gigs of addressable memory at most using a simple pointer scheme (and usually more like 512 megs). Systems today use over 512 megs, and very soon will have over 4 gigs. It’s going to take years to make the transition to 64-bit memory pointers, and the sooner we start the better. It’s not a good idea to wait until there is a crippling need.”
The 32-bit refers to the data bus width, not the address bus width. For example, the 80286 was a 16-bit CPU but it could address 16MB of RAM. How is that possible? According to your calculations, there’s a 64KB limit.
Actually the 80286 is a great example of the difference between a simple and a complex memory scheme. To get 16 megs of addressable memory, all pointers were two registers in size. Memory was broken up into discrete 64KB pages, and there were 8 bits identifying the page number (the 16 bits were used for physical page number and logical page number, which even allowed for something very much like virtual RAM for systems like OS/2). 64K pages were not a real problem for executables, but for data things were very tricky. Once an application needed even a single data byte it needed to grab a full 64K (and yes, this was a big deal when computers had 2 or fewer megs). Rolling over generally meant grabbing another page. Efficient allocation of memory became impossible.
The transition to the 80386, which on Windows allowed for 4K pages, made memory management much easier and paging became “natural” again.
The 80286 had a 24-bit address bus width. 2^24=16MB. The same with modern 32-bit Intel cpus. They have a 36-bit address bus width. 2^36 is more memory than you can shake a stick at. The 512MB limit you’re talking about is not a limitation of the CPU. It’s a limitation of the chipset. Some desktop chipsets that I know of can address 768MB now. I bet there are those out there that can address more that I don’t know of.
There are; but the Intel/Windows setup with 4K pages was really only designed to use 512 megs. They attacked this problem a long time ago and added hackish features so everyone hit this limit “softly”. But if you think back 6+ years you’ll remember weird limits like Linux’s 128-meg swap files, 512-meg RAM limits, maximum virtual memory limits in Windows of 512 megs… These were coming from limitations in the original implementation. BTW, 2^36 bytes = 64 gigs, which is still not enough very long term. There are already many machines (not PCs) with more RAM than this. There are databases much larger than this.
One thing you didn’t think of is that 64-bit CPUs can mean less efficiency. There’s going to be more loss of usable memory because of padding and alignment. And a greater penalty when accessing single bytes, etc.
Of course. But given the speed of modern CPUs relative to the speed of base memory, the penalty for applications accessing single bytes in a non-sequential manner is already huge (you pull a ton of no-ops). 64 bits isn’t going to make that worse, while the larger caches on 64-bit chips will do a great deal to reduce cache misses.
“To get 16 megs of addressable memory, all pointers were two registers in size. Memory was broken up into discrete 64KB pages, and there were 8 bits identifying the page number (the 16 bits were used for physical page number and logical page number, which even allowed for something very much like virtual RAM for systems like OS/2).”
This is totally wrong. Memory was broken into segments associated with a handle called a selector. Each selector pointed to a data structure (an LDT or GDT entry) that described the beginning and the end of a segment. Segments could overlap. From within a segment you could use a NEAR pointer (2 bytes, one register in size) to address just the 16-bit offset. You only needed FAR pointers (4 bytes, two registers in size) when addressing between segments.
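A toy model of the selector scheme described above may make this clearer. The table contents below are invented for the example; real LDT/GDT entries encode base and limit quite differently:

```python
# Toy model of 286 protected-mode addressing: a selector is a handle
# into a descriptor table; each entry gives a segment's base and limit.
# The selectors, bases and limits below are made up for illustration.
descriptor_table = {
    0x08: {"base": 0x010000, "limit": 0x7FFF},   # a 32 KB segment
    0x10: {"base": 0x014000, "limit": 0xFFFF},   # overlaps the first one
}

def linear_address(selector, offset):
    """Translate selector:offset, faulting if the offset passes the limit."""
    entry = descriptor_table[selector]
    if offset > entry["limit"]:
        raise MemoryError("protection fault: offset beyond segment limit")
    return entry["base"] + offset

# A NEAR pointer is just the 16-bit offset inside the current segment;
# a FAR pointer carries both the selector and the offset.
print(hex(linear_address(0x08, 0x0100)))  # 0x10100
print(hex(linear_address(0x10, 0x0100)))  # 0x14100: segments overlap
```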
“64K pages were not a real problem for executables, but for data things were very tricky. Once an application needed even a single data byte it needed to grab a full 64K (and yes, this was a big deal when computers had 2 or fewer megs). Rolling over generally meant grabbing another page. Efficient allocation of memory became impossible.”
Totally wrong again. What you’re describing is sort of like virtual memory with a 64KB page. The 286 did not have paging capabilities. The MOV reg, mem opcode took 5 cycles to complete. Explain to me how an 8MHz CPU could “grab” 64KB in 5 cycles. And where the heck would it put it, since it had neither a TLB nor cache memory? That’s ridiculous. The only performance penalty you get from accessing a new segment is that the LDT or GDT entry for the selector had to be loaded into the hidden registers. Each entry was only 8 bytes long.
“There are; but the Intel/Windows setup with 4K pages was really only designed to use 512 megs. They attacked this problem a long time ago and added hackish features so everyone hit this limit ‘softly’.”
You can’t count Windows memory limitations as a problem of the CPU. Windows has different limits because of the way it divides the 4GB virtual address space up. It has nothing to do with the CPU.
Even the 386 can have 1K entries in the base table and 1K entries in each leaf table. 1K*1K*4KB per entry = 4GB. This is not even counting the newer 36-bit address space. Read this page to reduce your confusion: http://people.freebsd.org/~jhb/386htm/s05_02.htm
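The figure works out exactly; a sketch of the two-level split:

```python
# 386 two-level paging: a 32-bit linear address splits into a 10-bit
# page-directory index, a 10-bit page-table index, and a 12-bit offset.
PDE_ENTRIES = 1024   # page-directory entries
PTE_ENTRIES = 1024   # page-table entries per table
PAGE_SIZE = 4096     # bytes per page

# Full reach of the structure: 1K * 1K * 4KB = 4 GB.
print(PDE_ENTRIES * PTE_ENTRIES * PAGE_SIZE == 2 ** 32)  # True

def split_linear(addr):
    """Split a 32-bit linear address into (directory, table, offset)."""
    return (addr >> 22) & 0x3FF, (addr >> 12) & 0x3FF, addr & 0xFFF

print(split_linear(0xC0123456))  # (768, 291, 1110)
```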
“But if you think back 6+ years you’ll remember weird limits like Linux’s 128-meg swap files, 512-meg RAM limits, maximum virtual memory limits in Windows of 512 megs… These were coming from limitations in the original implementation.”
It’s not a CPU limitation. My old Pentium can mount a little under 2GB (a file size limit in Linux, nothing to do with the CPU) of swap space on my 30GB HDD with a recent version of the Linux kernel. Once again, you can’t count software limitations as a problem of the CPU.
“BTW, 2^36 bytes = 64 gigs, which is still not enough very long term.”
You’ve got to be kidding. Name me a desktop with 64GB of RAM. It will be a long time before anyone, Mac or PC, needs 1GB of RAM, not to mention 64GB. Besides that, it’s trivial to increase the address bus size. You can have a 32-bit data bus and a 64-bit address bus if you want.
“There are already many machines (not PCs) with more RAM than this. There are databases much larger than this.”
All servers, clusters, enterprise boxes, or specialty boxes. Does not apply here. We’re talking about whether or not 32-bit is good enough for Macs. It is.
“Of course. But given the speed of modern CPUs relative to the speed of base memory, the penalty for applications accessing single bytes in a non-sequential manner is already huge (you pull a ton of no-ops). 64 bits isn’t going to make that worse, while the larger caches on 64-bit chips will do a great deal to reduce cache misses.”
You’re assuming 64-bit CPUs will have more cache entries. That’s the important distinction here. 64-bit CPUs will need caches that are twice as wide. Having more cache memory in terms of bytes won’t guarantee better performance; you need more cache entries. Plus, it’s harder to have higher orders of associativity when you have a wider data bus. (Not impossible, just harder, which is the same as saying more expensive in terms of dollars.)
Plus, there’s no reason you can’t put more cache into a 32-bit CPU. Cache is not limited to 64-bit CPUs. On the other hand, you’re guaranteed more wasted memory because of alignment and padding on 64-bit CPUs.
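The padding point can be seen directly with Python’s struct module: with native alignment, a 1-byte field followed by an 8-byte field costs 7 bytes of padding, versus 3 bytes of padding before a 4-byte field (native sizes shown are for typical 64-bit platforms):

```python
import struct

# Native ("@", the default) layouts include alignment padding;
# "=" layouts are packed. "c" = char, "i" = 4-byte int, "q" = 8-byte int.
print(struct.calcsize("cq"))   # 16 on typical 64-bit platforms (7 pad bytes)
print(struct.calcsize("ci"))   # 8 (3 pad bytes)
print(struct.calcsize("=cq"))  # 9: the raw payload, no padding
print(struct.calcsize("=ci"))  # 5
```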
“This is totally wrong. Memory was broken into segments associated with a handle called a selector. Each selector pointed to a data structure (an LDT or GDT entry) that described the beginning and the end of a segment. Segments could overlap. From within a segment you could use a NEAR pointer (2 bytes, one register in size) to address just the 16-bit offset. You only needed FAR pointers (4 bytes, two registers in size) when addressing between segments.”
You seem to be describing the 8088 memory model with 64K “pages” overlapping every 16 bytes not the 80286 model, i.e. the 20 bit model not the 24 bit model.
“Explain to me how an 8MHz CPU could ‘grab’ 64KB in 5 cycles.”
It had nothing to do with grabbing; it was the ability to cycle through it quickly in assembly language loops.
“It’s not a CPU limitation. My old Pentium can mount a little under 2GB (a file size limit in Linux, nothing to do with the CPU) of swap space on my 30GB HDD with a recent version of the Linux kernel. Once again, you can’t count software limitations as a problem of the CPU.”
Sure I can; that’s exactly my point. The limitations of the CPU lead to limitations of OSes. The limitations of the 80386 model are starting to affect PCs and will affect them much more drastically as this decade continues. For Windows systems this might be tolerable, but I see no reason for the Mac, if they are going to do a CPU switch anyway, to pick up these problems.
Jeff: “BTW, 2^36 bytes = 64 gigs, which is still not enough very long term. There are already many machines (not PCs) with more RAM than this. There are databases much larger than this.”
Malachai: “You’ve got to be kidding. Name me a desktop with 64GB of RAM. It will be a long time before anyone, Mac or PC, needs 1GB of RAM, not to mention 64GB. Besides that, it’s trivial to increase the address bus size. You can have a 32-bit data bus and a 64-bit address bus if you want.
All servers, clusters, enterprise boxes, or specialty boxes. Does not apply here. We’re talking about whether or not 32-bit is good enough for Macs. It is.”
I bought a laptop 20 months ago with half a gig of RAM. I’m upgrading my wife’s PowerMac to 1.5 gigs and she isn’t a power user by any means. If I were going to buy a desktop today, there is no way I’d have under 2 gigs of RAM in it. Nothing speeds up a computer more than having enough RAM to not need to swap out applications and data while multitasking. The whole point of a switch to the Pentium 4 would be to gain a slight CPU boost. It’s completely unreasonable to assume Apple would go through that kind of pain and sell computers short on RAM.
Just pulling my process list:
IE 70 megs
outlook 22 megs
acrobat 17.5 megs
word (no documents loaded) 17.1 megs
3 terminals 7 megs each
about 15 services 4-6 megs each
I’m at 200 megs of RAM and I’m not even doing anything with my system; it’s virtually idle.
And yes I play with multi-gig databases all the time on this laptop, in access!
And this is today’s technology. The Mac is moving towards a full 3D environment; odds are their memory needs will be much greater than Windows’ needs. I have trouble seeing what most people are going to do with CPU speeds in excess of a dual 1.25GHz G4, much more than I have trouble seeing what a desktop user is going to do with 8 gigs of RAM. Further, I’d expect memory use to go up quite rapidly. When I bought my 386-40 12 years ago, 1-2 megs was average (and the price per system was much higher than today); today consumer systems ship with 256 times that much on average, and if you compare the same price points, more like 512 times as much. 128K was average in ’85, and 16K was average around ’80. That is, memory need is following Moore’s law and growing at 40% annually compounded. The jump from 1 gig to 64 gigs is 6 doublings, or about 9 years. For high-end desktops (about 2 gigs today) you only have 5 doublings, or about 7 years.
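The doubling arithmetic in the paragraph above can be checked directly (taking the post’s implied rate of roughly one doubling every year and a half at face value):

```python
import math

def doublings(start_gb, end_gb):
    """How many times memory must double to get from start to end."""
    return math.log2(end_gb / start_gb)

def years(start_gb, end_gb, years_per_doubling=1.5):
    """Time for that growth at the post's assumed rate."""
    return doublings(start_gb, end_gb) * years_per_doubling

print(doublings(1, 64))  # 6.0 doublings from 1 GB to 64 GB
print(years(1, 64))      # 9.0 years
print(doublings(2, 64))  # 5.0 for high-end desktops at 2 GB today
print(years(2, 64))      # 7.5 years
```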
I don’t know how old you are; but the people who thought the 640k limit was no big deal made almost exactly the same argument. When 64k was very comfortable and 128k excessive the idea of going over a 1meg seemed insane. But the people who noticed that home PCs had gone from 4k to 128k very quickly were worried.
“Plus it’s harder to have higher orders of associativity when you have a wider data bus.”
This line I didn’t understand. What does it mean?
“You seem to be describing the 8088 memory model with 64K “pages” overlapping every 16 bytes not the 80286 model, i.e. the 20 bit model not the 24 bit model.”
I’m not. The 20-bit memory model is called “real mode” addressing and had a limit of 1MB. What I’m talking about is “protected mode” access. Segments can overlap. Heck, if you set the same base and limit values in the LDT/GDT for two different selectors, two segments can map to the same physical memory location.
“It had nothing to do with grabbing; it was the ability to cycle through it quickly in assembly language loops.”
You said:
“64K pages were not a real problem for executables, but for data things were very tricky. Once an application needed even a single data byte it needed to grab a full 64K (and yes, this was a big deal when computers had 2 or fewer megs).”
First off, 64KB pages is wrong. The 286 allowed division of memory into variable-sized segments, not fixed-size pages. Second, the statement about the need to “grab” 64KB for each byte access is wrong. I proved that by using the instruction timings from the Intel programmer’s reference book. A memory access to copy from memory to a register took 5 cycles. If it had to “grab” 64KB for each access, then it would have to be a magical CPU, because a 16-bit CPU running at 8MHz cannot move 64KB of memory in 5 cycles. Furthermore, this has nothing to do with loops. This is moving one small word of memory.
“Sure I can; that’s exactly my point. The limitations of the CPU lead to limitations of OSes. The limitations of the 80386 model are starting to affect PCs and will affect them much more drastically as this decade continues. For Windows systems this might be tolerable, but I see no reason for the Mac, if they are going to do a CPU switch anyway, to pick up these problems.”
Once again: all the limits you’ve mentioned are not CPU limitations. CPUs from the 386 up can fully support the entire 4GB address space. The limitations you’ve mentioned are SOFTWARE and CHIPSET limitations (not the other way around): problems with their respective OSes and chipsets. There’s nothing stopping a good OS, say Mac OS X, from using the entire memory space, up to 64GB on 36-bit-capable iterations of the x86, provided the chipset supports it.
“I bought a laptop 20 months ago with a 1/2 gig of ram. I’m upgrading my wife’s powerMac to 1.5 gigs and she isn’t a power user by any means. If I were going to buy a desktop today there is no way I’d have under 2 gigs of ram in it.”
The average today is only 256MB. If what you say is true, then you’re not the average user and cannot be counted among the majority. I do Java/C/C++ development and I don’t have any problems in 256MB of RAM (I mean even with the bloated Forte running as my primary IDE). I’ve built Linux kernels, gcc 3.2, etc. No problems. It’s not expensive to bump my machine to 512MB or even higher, but I don’t need it. Most people don’t either.
“Its completely unreasonable to assume Apple would go through that kind of pain and sell computers short on ram.”
You wouldn’t be short at all. P4s and their motherboards can use just as much or more RAM than your current PPC iterations. Plus, you get faster RAM like DDR and RDRAM, and a quad-pumped bus. If you want so much RAM, buy the P4 server boards. Here is a dual Xeon 1U with 16GB of RAM: http://www.siliconmechanics.com/sm-1270.php Mac OS X would scream on that. Obviously not a fair comparison to Apple desktops, but it serves to point out that the CPU can handle the gobs of memory. Plus, compare dual 2.5GHz to dual 1.25GHz. The Apple box will get beaten down with the much slower SDRAM and half the GHz rating.
“Just pulling my process list:
IE 70 megs
outlook 22 megs
acrobat 17.5 megs
word (no documents loaded) 17.1 megs
3 terminals 7 megs each
about 15 services 4-6 megs each
I’m at 200 megs of RAM and I’m not even doing anything with my system; it’s virtually idle.”
Still a long way from 1GB, not to mention 64GB. Can you even afford 64GB of RAM? A stick of 1GB is $200; 36GB is $7200. LOL. It will be a while before the average Joe could even afford 1GB, with RAM prices going up and all.
“And this is today’s technology. The Mac is moving towards a full 3D environment; odds are their memory needs will be much greater than Windows’ needs.”
That mostly affects how much RAM you need on your display card. Look at games. You can already run full 3D games in 256MB of main RAM with a 128MB GeForce display card. A 3D UI will probably not be more memory intensive than 3D games.
“The jump from 1 gig to 64 gigs is 6 doublings, or about 9 years. For high-end desktops (about 2 gigs today) you only have 5 doublings, or about 7 years.”
That’s a long time in computer years; 7 years is forever. By that time, 64-bit CPUs will be available in vast quantities and then we can talk about a 64-bit Mac. Going 64-bit right now is too abrupt, not to mention expensive. The Itanium 2 CPU alone is what, $2K? $3K? A 32-bit x86 is the perfect transition CPU, since the PowerPC is pretty much at the end of the line. It’s the most sensible direction.
“I don’t know how old you are; but the people who thought the 640k limit was no big deal made almost exactly the same argument. When 64k was very comfortable and 128k excessive the idea of going over a 1meg seemed insane. But the people who noticed that home PCs had gone from 4k to 128k very quickly were worried.”
The 640KB limit was no big deal. I was there. I wasn’t worried then and I’m not worried now. It worked out just fine. No reason to think it won’t again.
“This line I didn’t understand. What does it mean?”
Read this -> http://burks.brighton.ac.uk/burks/foldoc/92/104.htm . It’s not an in-depth article, but hopefully it’s enough for you to understand. The idea is: the greater the N, the more expensive (more comparators) and the more efficient the cache.
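A minimal sketch of the comparator point: in an N-way set-associative cache, a lookup computes one set index and then compares the tag against all N tags in that set at once, so N is the number of parallel comparators the hardware must pay for. The cache geometry below is invented for the example:

```python
LINE_SIZE = 64  # bytes per cache line

def lookup(cache_sets, address):
    """Return True on a hit. cache_sets maps set index -> list of tags;
    the list length is the associativity N (one comparator per entry)."""
    num_sets = len(cache_sets)
    line = address // LINE_SIZE
    index = line % num_sets   # which set this address belongs to
    tag = line // num_sets    # compared against every tag in the set
    return tag in cache_sets[index]

# A tiny 2-way cache with 4 sets (4 sets * 2 ways * 64 B = 512 B total).
sets = [[] for _ in range(4)]
addr = 0x1A40
line = addr // LINE_SIZE
sets[line % 4].append(line // 4)          # install the line

print(lookup(sets, addr))                 # True: hit
print(lookup(sets, addr + LINE_SIZE))     # False: maps to another set
```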
“Once again, that is a bad analogy. The Netherlands government is not actively recruiting me and I’m not interested in being their citizen. Plus you’re comparing computer CPUs to real laws? That’s ludicrous. Would you compare the destruction of old computers to euthanasia for the old? It’s not the same.”
That is of course not true. Both issues are about policies. There could be a poll about Apple’s environmental policies, and there could be a vote on environmental regulations by the government. And regardless of whether you are being recruited and whether you think the ‘offer’ is feasible for you, you are still a potential immigrant. It’s only a matter of how realistic it is. That is clearly a sliding scale; you cannot point out the moment where a proposition turns from realistic into unrealistic (what if you lived just over the border in the Dutch-speaking part of Belgium?).
There are arguments against this comparison that make sense, but I haven’t heard them from you. Things like:
– The government is different from an organisation (one is controlled through elections, the other through capitalism and government regulations).
– This is not comparable to an election. A survey is a better comparison. Of course, surveys are usually confined to one country, or the results are separated by country.
– Intelligent people pay attention to elections and serious surveys. A self-selecting, abused and flawed OSNews poll, on the other hand, is just a way to attract flame-baiters. I like this argument best.
“One reason might be that you don’t want to migrate, but want to have us do something you like. The same is probably true here.”
“No it’s not. I’m a proud American. There’s nothing you can do to make me immigrate. To immigrate is to lose myself and my identity. Immigration is a life-altering decision. Choosing computer CPUs is just economics and convenience.”
You didn’t respond to my argument. I said that although you don’t want to migrate, you might want to stop us from doing something. For example, the US might want to tariff foreign products (like steel) in an attempt to rape the world economy. The proper EU response would be to return the favor, but you might be interested in preventing that. That would be a good reason not to let you vote in the EU, isn’t it?
“The truth is you can do everything you want and ever need on either the PC or the Mac. They’re equally capable. The only difference is the experience (i.e. how the OS feels, how long it takes to get something done, how intuitive the interfaces are, etc.).”
Does Final Cut Pro exist on the PC? Does Half-Life exist on the Mac? Different tools have different strengths. You pick the right tool for the job.
“Probably all PC users. We all know Mac hardware is more expensive and inferior. I want the software, that’s all.”
So you agree that the poll is useless because many voters actually want something else than what is asked, but try to use the poll to further their agenda (move to x86 != move to commodity x86)? Does it not follow logically that the poll would be far more accurate if those fools weren’t allowed to vote?
“Power4 and PowerPC can be lumped together because they are similar. It’s not wrong. It’s a generalization. Just like lumping XScale and StrongArm together.”
The Power4 is a server-class, super-expensive, ultra-hot CPU without vector processing. The Power4-based PowerPC is a desktop/workstation-class, cheap, fairly efficient CPU with (probably) an AltiVec-compatible vector unit. How are these CPUs ever in the same league?
“That is of course not true. Both issues are about policies.”
Well, there is that similarity, but doesn’t mean it’s a good analogy. Like euthanasia compared to destroyed computers. Still a matter of policy. But it’s not the same.
“And regardless of whether you are being recruited and whether you think the ‘offer’ is feasible for you, you are still a potential immigrant.”
No I’m not. What does the word potential mean? So regardless of whether Cindy Crawford is married or even remotely interested in me, she is a potential wife for me? More lunacy.
“It’s only a matter of how realistic it is. That is clearly a sliding scale, you cannot point out the moment where a proposition turns from realistic into unrealistic (what if you lived just over the border in the dutch-speaking part of Belgium?).”
It still wouldn’t matter. It’s like asking if the decision to kill an ant is analogous to abortions. Sure there are similarities, but the levels of difference are astounding.
“- The government is different from an organisation (one is controlled through elections, the other through capitalism and government regulations).”
If I wanted to play devil’s advocate… You vote with your money.
“- This is not comparable to an election. A survey is a better comparison. Of course, surveys are usually confined to one country, or the results are separated by country.”
That says nothing. It only says a survey is a good analogy. The most important part of the statement, “This is not comparable to an election,” is not substantiated.
“- Intelligent people pay attention to elections and serious surveys. A self-selecting, abused and flawed OSNews poll, on the other hand, is just a way to attract flame-baiters. I like this argument best.”
That is not an argument. It’s an opinion.
“You didn’t respond to my argument. I said that although you don’t want to migrate, you might want to stop us from doing something. For example, the US might want to tariff foreign products (like steel) in an attempt to rape the world economy. The proper EU response would be to return the favor, but you might be interested in preventing that. That would be a good reason not to let you vote in the EU, isn’t it?”
I didn’t understand your argument. Thanks for clarifying. But this argument makes no sense here. You and I live in different countries so our economic interests might be different as you say, etc. But we’re both customers of Apple so we belong in the same body. It doesn’t matter that I only use Mac once in a while. If I pay for the right to use it, I have the right to decide what the future should hold. So I like PCs more right now, so what.
“Does Final Cut Pro exists on the PC? Does Halflife exist on the Mac? Different tools have different strengths. You pick the right tool for the job.”
There are similar programs that do the same thing. You can do what you do with Final Cut Pro on the PC, just with another program. And the same vice versa.
“So you agree that the poll is useless because many voters actually want something else than what is asked, but try to use the poll to further their agenda (move to x86 != move to commodity x86)? Does it not follow logically that the poll would be far more accurate if those fools weren’t allowed to vote?”
No. It’s exactly what is asked. The poll asks what CPU Apple should switch to. It doesn’t ask what CPU Apple should switch to given that Apple only supports current users without care for future PC converts.
“The Power4 is a server-class, super-expensive, ultra-hot CPU without vector processing. The Power4-based PowerPC is a desktop/workstation-class, cheap, fairly efficient CPU with (probably) a altivec-compatible vector unit. How are these CPU’s ever in the same league?”
How are the Hammer, Xeon, and P4 in the same league? How come they’re lumped together in the x86 category? They share a common architecture, of course. You wouldn’t lump an ARM with a MIPS, but it’s perfectly correct to generalize Power4 and PowerPC together.
I’m going to concede the 80286 point. I have vague memories of the issue and your counter-argument does seem to make sense. However, since you are old enough that you watched the 8-bit CP/M systems fall to the 16-bit systems, which fell to the 32-bit systems, I really have trouble understanding why you don’t think the 32s are soon going to fall to the 64s.
Jeff: “I bought a laptop 20 months ago with a 1/2 gig of RAM. I’m upgrading my wife’s PowerMac to 1.5 gigs and she isn’t a power user by any means. If I were going to buy a desktop today there is no way I’d have under 2 gigs of RAM in it.”
The average today is only 256MB. If what you say is true, then you’re not the average user and can’t be counted in the “most people” category. I do Java/C/C++ development and I don’t have any problems with 256MB of RAM (even with the bloated Forte running as my primary IDE). I’ve built Linux kernels, gcc 3.2, etc. No problems. It’s not expensive to bump my machine to 512MB or even higher, but I don’t need it. Most people don’t either.
Compilers tend to be fairly efficient. Heck, I built a Linux kernel on a 166MHz MMX with 64 megs of RAM (a laptop, so a slow drive on top of that) and it wasn’t that terrible. It would be bloated IDEs, for example, that would be the killer. As for not being the average user, I’m not sure. I think I tend to multitask more heavily than average; on the other hand, I don’t run anything really intensive like real-time video editing or 3D modeling.
Jeff: “Just pulling my process list:
IE: 70 megs
Outlook: 22 megs
Acrobat: 17.5 megs
Word (no documents loaded): 17.1 megs
3 terminals: 7 megs each
about 15 services: 4–6 megs each
I’m at 200 megs of RAM and I’m not even doing anything with my system; it’s virtually idle.”
Still a long way from 1GB, not to mention 64GB. Can you even afford 64GB of RAM? A 1GB stick is $200, so 36GB is $7,200. LOL. It will be a while before the average Joe can even afford 1GB, with RAM prices going up and all.
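For what it’s worth, the cost arithmetic here is trivial to sketch. The $200-per-gigabyte figure is just the price quoted in this thread, not anything authoritative:

```python
# Rough RAM cost arithmetic using the per-stick price quoted above.
# The $200/GB figure is this thread's number, not current market data.
PRICE_PER_GB = 200  # dollars per 1GB stick, as quoted

def ram_cost(gigabytes: int) -> int:
    """Total cost in dollars at a flat per-gigabyte price."""
    return gigabytes * PRICE_PER_GB

print(ram_cost(1))   # 200
print(ram_cost(36))  # 7200
print(ram_cost(64))  # 12800
```

At that flat rate, fully populating a 64GB machine would run nearly $13,000, which is the point: the hardware ceiling is academic when the memory to fill it is priced out of reach.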
First off, my point was that if I were at, say, 64 megs, even relatively idle my setup would have been having problems, which would have overwhelmed any CPU differences. At 128 it might have been a toss-up, but with 256 the CPU really becomes the issue.
As for buying 64 gigs now: yes, I can afford it, but PCs really can’t take advantage of it. If I need that kind of memory I’m much more likely to be on a Sun than a PC. On the other hand, I have every expectation that I’ll be buying a 64+ gig system before the decade is out.
As for the average Joe and $200 for RAM: remember that Apple’s target customer is wealthier than what the gray-box PC makers are going for. The difference between spending $200 and $75 for RAM is really nothing.
“And this is today’s technology. Mac is moving towards a full 3D environment; odds are their memory needs will be much greater than Windows’ needs.”
That mostly affects how much RAM you need on your display card. Look at games: you can already run full 3D games in 256MB of main RAM with a 128MB GeForce display card. A 3D UI will probably not be more memory-intensive than 3D games.
Here I think you are dead wrong. 3D games don’t run on 4-meg systems just because they have 128 megs of video RAM; you need huge quantities of main RAM for paging stuff to and from the video card. 3D modeling software requires a great deal of RAM, and there is no reason to believe a 3D desktop won’t as well.
“The jump from 1 gig to 64 gigs is 6 doubles or about 9 years. For high end desktops (about 2 gigs today) you only have 5 doubles or about 7 years.”
That’s a long time in computer years; 7 years is forever. By that time, 64-bit CPUs will be available in vast quantities and then we can talk about a 64-bit Mac. Going 64-bit right now is too abrupt, not to mention expensive. The Itanium 2 CPU alone is what, $2K? $3K? A 32-bit x86 is the perfect transition CPU since the PowerPC is pretty much at the end of the line. It’s the most sensible direction.
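The “doubles” arithmetic quoted above is easy to sketch. The 18-month doubling period is an assumption implied by the quoted figures (6 doublings ≈ 9 years), not something either of us stated explicitly:

```python
import math

# Sketch of the doubling arithmetic quoted above: how long until typical
# RAM grows from start_gb to target_gb, assuming one doubling every
# 18 months. The 1.5-year period is inferred from the quoted figures.
DOUBLING_PERIOD_YEARS = 1.5  # assumed, not stated in the thread

def years_to_reach(start_gb: float, target_gb: float) -> float:
    """Years for RAM to grow from start_gb to target_gb at the assumed rate."""
    doublings = math.log2(target_gb / start_gb)
    return doublings * DOUBLING_PERIOD_YEARS

print(years_to_reach(1, 64))  # 9.0  (6 doublings)
print(years_to_reach(2, 64))  # 7.5  (5 doublings)
```

So both of the quoted figures check out; the disagreement is only over whether 7–9 years counts as “soon.”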
OK, so the main issue is price. On the price of the Itanium, I’ll use Intel’s price list as my source (http://www.intel.com/intel/finance/pricelist/). Intel charges about $3K for a 2MB-cache part even on the Xeon; for the same chip with 256K of cache the price drops to $400. The Itanium 2 with 1.5MB is $2,200 and with 3MB is $4,200. The cache is what drives up the price of the chip; bring it down to, say, 256K and you could have the Itanium 2 today at under $500. A year or two from now we’d be looking at a 2GHz Itanium 2 with 512K of cache for under $500, which would be an excellent chip for the PowerMac and even a possibility for the high-end iMac. Further, I could see Intel being willing to lose money to get the Apple/Mac contract and sell the chips below cost; the pressure on Dell/HP/Microsoft would be intense as soon as Apple announced the move to 64-bit systems. Having the Itanium be Apple’s chip also knocks Motorola out of the game, and Intel has already shown it is willing to lose money to kill off competition. While I can’t bank on that, it could make the change even easier and put the price difference between an x86 and an Itanium 2 (with the same amount of cache) under $100.
So the money argument fails; it doesn’t make sense. Apple has an inexpensive chip for low-end systems (the G3/G4); where they are lacking is a single high-end chip. BTW, quads don’t work because Apple is a major player in the laptop market and the PowerBook is getting much too slow.