>PC 970FX @ 2 Ghz: 24.5 Watts (SNIP)
Refer to
http://www-03.ibm.com/technology/power/newsletter/august2004/articl…
Refer to “Figure 2. Maximum power envelope from 0.8 to 1.3 V showing the power reduction possible through power-tuning methods.”
At 2 GHz, max power is 40 Watts.
One shouldn’t equate “typical power” with “max power”.
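The typical-vs-max gap follows from how dynamic CMOS power scales with voltage (roughly P ~ C·V²·f). A quick sketch of why the 0.8–1.3 V tuning range in that figure matters so much; the effective-capacitance constant here is invented just to make the 1.3 V / 2 GHz corner land near 40 W, it is not IBM's number:

```python
def dynamic_power(c_eff, volts, freq_hz):
    """Approximate dynamic CMOS power: P ~ C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

# Illustrative effective capacitance, chosen so 1.3 V @ 2 GHz gives ~40 W.
C_EFF = 40 / (1.3 ** 2 * 2e9)

print(round(dynamic_power(C_EFF, 1.3, 2e9), 1))  # max-envelope corner: 40.0
print(round(dynamic_power(C_EFF, 0.8, 2e9), 1))  # same clock, lowered voltage
```

Same clock, but dropping the voltage cuts power quadratically, which is why the max envelope and the power-tuned figure are so far apart.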
You’ll notice that the people who are bashing the Mac the most have the words “gamer” or “gaming” in the post…
i can’t believe i just read through all that… and i really can’t believe how fiercely people cling to technology that they don’t even seem to understand. personally i’d be just as happy with a g5 as i am with my amd d.i.y. box. they’re both fine systems with way more power than i need the majority of the time. i just can’t afford the one i don’t have, so my choice is simple.
i personally wish the pc market would go risc and only provide x86 emulation for backwards compatibility. but i’m an ill-informed dork who realizes after years of schooling that i’m only beginning to scratch the surface of the complexity of computer science.
and screw processor speed anyway. triple the speed of the processor and you’re still waiting on I/O. especially if you’re using an operating system that loves to move the contents of memory back and forth from disk all the damn time.
>it’s not “it would be a non issue”, it is a non issue.
>GPU’s today do not even flood the AGP8x BUS.
Run NVIDIA’s SphereMark.
AGP8X is important as insurance for AGP texturing, i.e. if the on-board graphics memory were to be overwhelmed. The AGP8X bus has issues returning data from the graphics card, i.e. readback is slow; AGP8X is only a half-duplex bus. This issue must be resolved for future GPGPU applications, i.e. treating the GPU as a massively parallel 32-bit floating-point SIMD co-processor (only 32-bit single-precision floating point at the moment). Applications such as BionicFX (DSP-style operations via NVIDIA’s NV30/NV40) point to the future.
Reference
http://gpgpu.org/
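For anyone who hasn’t seen the GPGPU idea before, here’s a toy CPU-side sketch of the programming model: every element of a 32-bit float buffer gets the same operation (a BionicFX-style gain, in this case). On a real GPU each element would map to its own shader thread; this is just Python modeling the single-precision data path, and the names are made up for illustration:

```python
from array import array

def gpu_style_gain(samples, gain):
    """Apply a gain to every sample, SIMD-style.

    On a GPU all elements would be processed in parallel; here the loop
    stands in for that. Typecode 'f' gives 32-bit floats, matching the
    single-precision-only limitation of NV30/NV40-era hardware.
    """
    out = array('f', samples)
    for i in range(len(out)):   # a GPU would do all i at once
        out[i] *= gain
    return out

buf = array('f', [0.5, -0.25, 1.0])
print(list(gpu_style_gain(buf, 2.0)))  # -> [1.0, -0.5, 2.0]
```

The same "one small operation over a huge buffer" shape is what makes audio DSP a natural fit for the GPU.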
>PCI express’s value is in its being ready for the next >generations of GPU’s and for graphic cards with Multiple >GPU’s.
SLI with games/traditional graphics rendering is only one of the applications.
>I personally wish the pc market would go risc and only >provide x86 emulation for backwards compatibility (SNIP)
Modern x86 processors (e.g. the Pentium Pro (1st P6) and NexGen Nx586) actually translate variable-length (CISC) instructions into simpler fixed-length instructions (one of the RISC concepts).
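In case the decode step is unclear, here’s a toy illustration: a variable-length byte stream gets cracked into micro-ops that all share one fixed shape. The opcodes and encodings are invented for the example, not real x86:

```python
# Invented variable-length "CISC" encodings: opcode -> (name, total bytes).
ENCODINGS = {
    0x01: ("ADD_REG", 2),   # opcode + 1 operand byte
    0x02: ("LOAD_MEM", 3),  # opcode + 2 address bytes
    0x03: ("NOP", 1),       # opcode only
}

def decode(code):
    """Walk a variable-length byte stream, emitting fixed-format micro-ops."""
    uops, i = [], 0
    while i < len(code):
        name, length = ENCODINGS[code[i]]
        operands = tuple(code[i + 1:i + length])
        # Every micro-op has the same fixed shape: (name, op0, op1),
        # which is what lets the RISC-style core behind the decoder
        # schedule them uniformly.
        padded = operands + (0,) * (2 - len(operands))
        uops.append((name,) + padded)
        i += length
    return uops

print(decode(bytes([0x03, 0x01, 0x07, 0x02, 0x10, 0x20])))
```

The front end eats irregular encodings; everything past decode only ever sees the uniform micro-ops.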
@ G4 (MPC7447A) (SNIP)
MPC7447A @ 1420 MHz, max power dissipation: ~30 Watts.
Refer to
http://www.freescale.com/webapp/sps/site/taxonomy.jsp?nodeId=018rH3…
The AMD Opteron EE x40 (@ 1.4 GHz, 130 nm, ~30 Watts) reaches this target. My old 130 nm Athlon 64 CG stepping @ 1 GHz draws 22 Watts (1.10 Volts × 18 Amps = ~19.8 Watts for the core).
Refer to
http://www.tomshardware.com/cpu/20041115/pentium4_570-20.html
for real-life x86 wattage comparisons. Notice the Athlon 64 3400+ (S754, HTT800, Clawhammer CG) is close to the Opteron HE 246’s 55 Watts.
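The core-wattage figure quoted above is just Ohm’s-law arithmetic, P = V × I, which anyone can sanity-check:

```python
def core_power(volts, amps):
    """Core power draw in watts: P = V * I."""
    return volts * amps

# The Athlon 64 figure quoted above: 1.10 V core at 18 A.
print(round(core_power(1.10, 18), 1))  # -> 19.8
```

That ~19.8 W is the core alone, which is why it sits a couple of watts under the 22 W package number.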
Back to real work on my G5 dualie.
And not one single scientific application.
Besides http://smc.vnet.net/timings50.html, which has Mathematica benchmarks and shows a Dell Precision 650 (4×3.06 GHz Xeon, 512 KB L2, 4 GB) being killed by an AMD XP-2700 (2.17 GHz, 333 FSB, 1 GB, Win XP Pro), does anybody know of other sites that compare different hardware platforms using number-crunching scientific software rather than artistic software as benchmarks?
Go here… you’ll see that the G5 is a great number cruncher. http://www.popularmechanics.com/technology/computers/1279211.html