“While it’s common knowledge that Microsoft has been working on Windows XP and Server 2003 for the AMD64 architecture for some time, little is known about the workings and limitations of this new operating system. We recently got the chance to try out the first publicly released variant of the operating system (Build 3790), and combined with reading through loads of tech docs and talking with folks over at AMD, we’ve compiled a summary of how we think the OS is shaping up, where it’s headed, and we’ll try to answer some of the common questions about the OS in general.” Read the preview article at the GamePC web site.
That’s all we need…
Some drivers. Lots and lots of drivers.
Hmm, I can see Intel telling its customers that there aren’t enough Opteron or Athlon 64 chips out there to justify building drivers for their products…
If it wasn’t for AMD, Intel would keep desktop PCs in 32-bit land forever. Bastards.
Another question: what do we need 64-bit computing on desktop systems for?
For the same kinds of benefits we got when desktops moved from 16-bit to 32-bit. Easy…
“Another question: what do we need 64-bit computing on desktop systems for?”
– digital audio/MIDI
– 3D rendering/movie creation
– format translation – AVI/MOV/DivX/etc.
– compiling programs
and most importantly – to run everything at once!!!!!!
Yes, similar benefits can be achieved, but for desktops and the market these CPUs are intended for, it is still early.
Why do we need 64-bit computing? Why not? AMD has stated that it is a relatively cheap move (die-size wise) to go from 32 to 64 bits. It’s also the perfect chance to introduce much-needed new registers. Intel has taken the elitist path of trying to relegate 64-bit computing to servers only (and at ridiculous prices at that), while AMD takes the view that 64-bit computing isn’t such a big deal and will be including it in everything, starting from their mobile chips, and why not handhelds too in the future.
Is 64-bit a huge deal? No. Were MMX, SSE, etc. big deals? No, but they were easy enough to add, so what the heck… 64 bits for everyone!
“Another question: what do we need 64-bit computing on desktop systems for?”
“For the same kinds of benefits we got when desktops moved from 16-bit to 32-bit. Easy…”
Such as exploding RAM and hard-disk requirements, and the ever-increasing need for faster CPUs just to maintain past levels of performance? Will 64-bit PCs bring us 16 GB RAM requirements in the next 3 years or so? Just remember what we were using in 1995: 16 MB of RAM for Win95 sounded outrageous.
As I look back, I don’t see increased bit widths giving much more in benefits compared to the costs.
I don’t think 16 EB of RAM is needed for desktops and workstations. Servers do benefit from it a lot and have been using 64-bit for years.
Most home users (I’d guess about 99%) don’t even have the full 4 GB in their desktop PC/Mac, so it’s pointless at the moment.
Intel’s 32-bit chips have PAE mode to address a lot more RAM (up to 64 GB), at the price of performance.
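(For the curious: on Windows, PAE is just a boot switch. A hypothetical boot.ini entry might look like the line below; the ARC path and OS name are illustrative, the /PAE switch itself is real.)

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /PAE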
But as with most technology, in 10 years’ time we will be frustrated by that terrible 16 EB limit (16 EB being the full 2^64-byte address space) and will need 2^128 bytes of address space to keep Windows 2013, Linux 4.2.66 and MacOS XIII running.
The article says there are no problems with 32-bit software, but I actually doubt that.
The screenshots show that the build number is 3790, the same as Windows Server 2003 (32-bit Windows XP is build 2600). That means WinXP64 uses the NT 5.2 kernel instead of NT 5.1.
Overall I think that’s good, but I suspect WinXP64 has the same compatibility problems as Win2003: most software works just fine, but a few programs don’t.
I currently have the trial version of Win2003 installed on my PC, and, for example, Dreamweaver refuses to run, GetRight crashes often, and I sometimes have problems when recording CDs.
Since I would imagine that most Linux drivers are open source, we probably won’t have to sit around waiting for companies to update their device drivers for 64-bit.
On the other hand, how hard is it to “upgrade” drivers from 32-bit to 64-bit? Is it simply a matter of recompiling with different GCC options, or does it take a massive rewrite?
It seems AMD’s 64-bit chip would be a winner (barring drivers)… BUT the Borg’s thinking, IMHO, will be:
Microsoft cannot afford to back WinAMD64, because of certain retaliation from Intel in the form of more funding for Linux development.
AMD relies on IBM for its chip production, and IBM is funding and supporting Microsoft’s would-be nemesis, Linux.
Also, WinAMD64 will not scale up as well as WinItanium against the hated Sun Microsystems in the high-end server battle.
So to attack Sun, IBM and Linux, Microsoft must ally with Intel and its Itanium 2.
That’s why WinXP 64 is shipping for Itanium and WinAMD64 is NOT being shipped.
“Is it simply a matter of recompiling with different GCC options, or does it take a massive rewrite?”
Yes… in the easiest case. But a lot of software is bound (out of laziness or something else) to data types of a certain length. So if one data type changes size (e.g. pointers going from 32 to 64 bits), it can mess up your whole program. At the very least, you (or a tool) have to check your code for compatibility.
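To make that concrete, here is a minimal C sketch (my own illustration, not from the poster) of the classic pointer-in-an-int bug that bites in a 32-to-64-bit port:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int x = 42;

        /* Broken assumption: a pointer always fits in 32 bits.
           Fine while sizeof(void*) == 4; silently truncates the
           address once pointers grow to 64 bits. */
        unsigned int bad = (unsigned int)(uintptr_t)&x;

        /* Portable: uintptr_t is defined to be wide enough. */
        uintptr_t good = (uintptr_t)&x;

        printf("sizeof(void*)=%u bad=0x%x good=0x%llx\n",
               (unsigned)sizeof(void *), bad, (unsigned long long)good);
        return 0;
    }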
““Another question: what do we need 64-bit computing on desktop systems for?”
– digital audio/MIDI
– 3D rendering/movie creation
– format translation – AVI/MOV/DivX/etc.
– compiling programs
and most importantly – to run everything at once!!!!!!”
But for those tasks you seldom need more than 4 GB of RAM. What you need first is a fast hard disk, and second a fast CPU. And a CPU isn’t faster just because it operates at a wider bit width. By the way, to speed up these kinds of software you need a vector engine like SSE.
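(In case “vector engine” is unclear, here is a minimal sketch of what SSE does: four 32-bit float additions in one instruction. That is data-level parallelism, a separate axis from 32- vs 64-bit addressing.)

    #include <xmmintrin.h>  /* SSE intrinsics */
    #include <stdio.h>

    int main(void) {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
        /* One SSE instruction adds four packed 32-bit floats at
           once, independent of the CPU's integer/pointer width. */
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }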
Two major performance benefits are to be had, especially for the gaming community:
1- Additional registers. x86-64 doubles the register files from 8 to 16, which may boost performance by up to 30% according to Unreal’s programmers (see the sketch after this list). According to AMD, extending Intel’s 8 general-purpose registers to x86-64’s 16 delivers 80-90% of the performance benefit of pure RISC designs like PowerPC, which has 32 general-purpose registers.
2- Flat memory addressing. The segmented architecture is a kludge that makes programming more complicated and carries a performance penalty; x86-64 does away with it in favor of clean, RISC-style flat addressing.
These are benefits to be had even before the 4 GB limit is reached.
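If you want to see the register benefit first-hand, here is a quick sketch (assuming GCC on a machine that can target both modes): compile the same function for each target and compare the generated assembly.

    /* regs.c - compare the two outputs:
         gcc -O2 -m32 -S regs.c -o regs32.s
         gcc -O2 -m64 -S regs.c -o regs64.s
       In 32-bit mode all eight arguments arrive on the stack and
       the temporaries compete for roughly 7 usable registers; in
       64-bit mode the first six arguments arrive in registers and
       the extra GPRs (r8-r15) soak up the temporaries. */
    long mix(long a, long b, long c, long d,
             long e, long f, long g, long h)
    {
        long t1 = a * b + c;
        long t2 = d * e + f;
        long t3 = g * h + t1;
        return t1 * t2 + t3 * (a - h);
    }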
For the BeOS fans out there: x86-64 is a clean slate, much like BeOS was.
“But for those tasks you seldom need more than 4 GB of RAM. What you need first is a fast hard disk, and second a fast CPU. And a CPU isn’t faster just because it operates at a wider bit width. By the way, to speed up these kinds of software you need a vector engine like SSE.”
Sorry for ROTFLMAO!!!, but that’s just plain ignorant. There are programs like GigaSampler which consume *ALL* (99%) of the resources of the PC they run on, and are therefore almost universally run on a separate PC. Chuck in a few VSTi synths (Reason, Plex, Reaktor, Absynth, etc.) while playing back 10-20 96 kHz/24-bit audio tracks, while recording 1 or 2 additional tracks and applying real-time DirectX effects to the audio tracks, and the fastest PC with its RAM maxed out simply crawls. I could easily use a terabyte of RAM. Really. Just a simple thing like loading samples into RAM instead of streaming them off the hard drive would be awesomely useful: the latency would be cut down to next to nothing. Truly useful and not one bit frivolous.
Video is even hungrier than audio. And by the way, rendering a 3D scene in something like World Builder or Maya is pretty close to 100% pure math; it has nothing to do with displaying the image, as you imply… although the word is used interchangeably for both meanings. After the scene is rendered (created, or compiled if you will), the vector hardware in your graphics card is used to display it.
And finally, a CPU is faster with a wider data path: a 64-bit CPU will process twice the data of a 32-bit CPU in one clock cycle.
First of all, the current memory limit for an application is not 4 GB. That is the address space for both your application and the kernel, and it is usually split 2/2 (a 1 GB kernel / 3 GB application split can be done in Linux; not sure about Windows).
So in reality your app won’t be able to use more than 2 gigabytes (maybe 3 after tweaking).
With a 64-bit OS and CPU, the actual amount of memory you have becomes the limit.
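Here is a minimal sketch of how you can see that ceiling in practice (illustrative only; exact numbers depend on the OS and its overcommit policy): keep reserving address space until malloc gives up.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Reserve 64 MB chunks until the address space runs out.
           On a 32-bit OS this stops near 2-3 GB no matter how much
           RAM you have; on a 64-bit OS it goes far beyond 4 GB. */
        const size_t step = 64u * 1024 * 1024;
        size_t total_mb = 0;
        int i;
        for (i = 0; i < 65536; i++) {      /* safety cap: 4 TB   */
            if (malloc(step) == NULL)      /* leaked on purpose  */
                break;
            total_mb += 64;
        }
        printf("reserved about %lu MB of address space\n",
               (unsigned long)total_mb);
        return 0;
    }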
“I could easily use a terabyte of RAM.” So then buy the new CPU, but you aren’t the normal user.
Intel still sells most of its CPUs to OEMs, offices, etc., and those buyers don’t need 64-bit processing. And because the major part of their production goes to such people, those people are (in Intel’s eyes) the desktop users.
“A 64-bit CPU will process twice the data of a 32-bit CPU in one clock cycle.”
That’s not generally true. If you are working with 64-bit integers, then they can be processed at full speed. But this does not mean the CPU can process two 32-bit integers simultaneously.
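A small C sketch of the distinction (mine, for illustration): the doubling only shows up when the values themselves are 64 bits wide; pairing up independent 32-bit operations is what SIMD units (MMX/SSE2) are for.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t a = 0x123456789ABCDEF0ULL, b = 1;
        /* One ALU instruction on a 64-bit CPU; on a 32-bit CPU the
           compiler must emit an add/adc pair for the two halves. */
        uint64_t big = a + b;

        /* Two independent 32-bit adds: the CPU does NOT magically
           fuse these into one 64-bit operation. */
        uint32_t x = 7, y = 9;
        x += 1;
        y += 1;

        printf("big=%llx x=%u y=%u\n", (unsigned long long)big, x, y);
        return 0;
    }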
“So then buy the new CPU, but you aren’t the normal user.
Intel still sells most of its CPUs to OEMs, offices, etc., and those buyers don’t need 64-bit processing.”
You’re confusing “normal user” with lamer. Care to guess how many people do digital audio/MIDI, for example? Millions. Just because lamers who only surf the web and write MS Word docs don’t need 64-bit is no reason to delay it. Dumbing down to the lowest common denominator is what makes the world stupid and evokes violence. You want to be a speed bump, get in my way and hold me back? I’ll run you off the road, ’cos stupid people suck.
“But this does not mean the CPU can process two 32-bit integers simultaneously.”
Duh. That’s exactly what it means.
You’re an idiot; this conversation is over.
‘That’s why WinXP 64 is shipping for Itanium and WinAMD64 is NOT being shipped.’
WinXP64 is being worked on and will be shipped around the end of the year, if not early next year, according to the article… and have you actually seen anyone running the version of Windows for the Itanium processor?!?! I think YOU need to read the article again!!!
/dev/null
P.S. Watch them delete my reply again, as they did to one of my other replies about a Linux-basher going on the offensive when a Linux user decided to bash Windows!!!
“WinXP64 is being worked on and will be shipped around the end of the year, if not early next year, according to the article…”
It’s a pre-beta OS, and from a technology point of view it looks very good. OK, drivers are an issue, but that could be overcome if MS came out and backed AMD.
The problem is the business-strategy angle:
Intel is the one and only company that could topple Microsoft, or at least cripple it. Bill Gates knows that better than anyone.
On WinItanium: HPQ has been shipping the zx2000 and zx6000 workstations for the majority of 2003. PC Pro (a UK-based magazine) has a review of the zx2000 in its Feb. 2003 edition.
In fact, HPQ is rolling out the Deerfield version (60-watt power consumption) of the Itanium-based zx2000 workstation this Monday. See the Reg article posted recently.
I hope the AMD64 processors are a major success and AMD makes some good market-share gains. We would all love to see AMD and Intel dueling it out with 40% share each. Apple, Transmeta and VIA can fight for the rest.
If it wasn’t for AMD, we would currently be spending $1000 on our brand-new Pentium III 1 GHz processors.
“And finally, a CPU is faster with a wider data path: a 64-bit CPU will process twice the data of a 32-bit CPU in one clock cycle.”
It is much more complicated than that! The problem with the Pentium today, when you are doing floating-point operations, is memory bandwidth. For example, the speed difference between an algorithm programmed with 32-bit floats and the same one programmed with 64-bit floats is largely due to memory bandwidth that is too slow. Remember that internally, all floating-point operations are done in 80-bit registers (there are 8 of them in the Pentium FPU), and the results are written back to main memory as 32, 64, or 80 bits, depending on the type used in your algorithm (float, double, or long double in C, for typical C compilers on 32-bit architectures). So the bottleneck here is the memory bus (which is already 64 bits wide on the Pentium, I think), not the CPU itself.
Before going to 64 bits, it would be better to have faster memory and bigger caches. The 0.09-micron process would be excellent for bigger caches; just look at the difference between a 128 KB and a 256 KB cache at a similar architecture and frequency: 24-bit/96 kHz audio was handled much better on a PIII than on a Celeron.
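For reference, a tiny sketch showing the in-memory widths being argued about (what a typical 32-bit x86 C compiler reports; long double is the x87 80-bit format, usually padded):

    #include <stdio.h>

    int main(void) {
        /* The x87 FPU computes in 80-bit registers regardless; only
           the store back to memory uses the width of the C type. */
        printf("float:       %u bytes\n", (unsigned)sizeof(float));
        printf("double:      %u bytes\n", (unsigned)sizeof(double));
        printf("long double: %u bytes\n", (unsigned)sizeof(long double));
        return 0;
    }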
“I could easily use a terabyte of RAM.”
That’s not really true. For example, the journalists at Sound on Sound say that having more than 1 GB of memory isn’t really useful for audio today, and I’ve experienced the same with my own computers.