Release Candidate 1 (RC1) of Windows XP Professional x64 Edition is now available (despite MS shelving its Itanium version): ZDNet has put Build 1289 through its paces.
I was wondering if it’s possible to build a 128-bit processor, and if so, why are we still on 64-bit processors… well, with a 128-bit processor there would be more addressable memory, games would be faster, etc.
64-bit today, then 128, 512, and 1024-bit processors (the future) :)
I installed XP x64 on my AMD64 system last week. It installed and runs just fine. I have rather generic hardware, so I had no problems with missing drivers that others reported.
The biggest issue is that there’s very little software available that’s been ported to 64 bits. XP x64 comes with a 64-bit version of IE (and I think Outlook Express and Media Player), but that’s about it. Commercial 64-bit software is also almost nonexistent.
Since I wanted a completely 64-bit system, I replaced XP with Gentoo Linux for the AMD64. The entire system consists of 64-bit binaries. In my opinion, Linux is a better choice for a 64-bit system right now than Windows.
“I was wondering if it’s possible to build a 128-bit processor, and if so, why are we still on 64-bit processors… well, with a 128-bit processor there would be more addressable memory, games would be faster, etc.”
Just how many terabytes of memory do you think are required so “games would be faster”?
“64-bit today, then 128, 512, and 1024-bit processors (the future) :)”
64-bit a decade or more ago. “128-bit” a decade or more from now. Probably much more.
“The biggest issue is that there’s very little software available that’s been ported to 64 bits.”
Frankly, most software doesn’t need to be. Not until x86-64 is the default architecture.
I’m typing this on a 64-bit UNIX system, and most of its binaries are not compiled 64-bit, nor is most of the commercial software available for it built that way. Only where it makes sense.
“Frankly, most software doesn’t need to be. Not until x86-64 is the default architecture.”
True, but it is nice to have them when possible. On my Gentoo system all of my binaries are 64-bit. Are they faster or better merely by being 64-bit? Probably not, but I like having a fully 64-bit system.
We will never need a 128-bit address space! With 128 bits you can address 2^128 bytes (approx. 340*10^36, or 4 billion * 4 billion * 4 billion * 4 GB)! We will never have memory chips that big. It’s physically impossible. I don’t even think we will need that much virtual memory.
An example: if you store 1 bit in each hydrogen atom and pack them together closely (not actually possible), you need a cube of approx. 47*10^15 m, or 5 light years, on each side!
I meant light years
There are already 256-bit processors. Look at http://www.transmeta.com/efficeon/efficeon_tm8800.html.
“We will never need a 128-bit address space!”
A lot of people have made the mistake of making such comments in the past.
“640K ought to be enough for anybody.” — Attributed to Bill Gates, 1981, but believed to be an urban legend.
http://rinkworks.com/said/predictions.shtml
Sorry, wrong calculation of the cube: you need a cube of just about 700 meters on each side.
Sorry
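For anyone who wants to reproduce that back-of-the-envelope figure, here is a rough Python sketch. The one-atom-per-addressable-byte and roughly-one-angstrom spacing are my own simplifying assumptions, not exact physics:

atoms = 2**128                      # one hydrogen atom per addressable byte (assumption)
side_in_atoms = atoms ** (1 / 3.0)  # atoms along one edge of the cube
print(side_in_atoms * 1e-10)        # ~698 metres at ~1 angstrom per atom, i.e. the ~700 m above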
I don’t think those processors have a 256-bit memory address space. Even if they do, you will never get that much memory.
The human race will die out before we need that much memory per computer! Sorry, it’s not going to happen!
How much RAM does it use once loaded? The screenshot they show of Task Manager shows 39 processes running; my SP1 system upgraded to SP2 runs 15, and even that could probably be reduced.
Is there a performance improvement over previous 64-bit builds?
How much disk space does the total install take compared to a 32-bit install of XP SP2?
Post a BootVis benchmark of that build and a 32-bit build on the same system.
This is why techies quit going to mainstream sites like zdnet and cnet for product information.
“True, but it is nice to have them when possible. On my Gentoo system all of my binaries are 64-bit. Are they faster or better merely by being 64-bit? Probably not, but I like having a fully 64-bit system.”
Well, as long as you aren’t claiming practical benefits from a 64-bit “ls”…
“I don’t think those processors have a 256-bit memory address space. Even if they do, you will never get that much memory.”
They don’t. They aren’t “256-bit” processors, but 32-bit processors. They just have a 256-bit wide instruction path. That’s something different entirely.
“A lot of people have made the mistake of making such comments in the past.”
It’s going to be a VERY long time before we need address spaces that large.
@AU: Well, perhaps we don’t need a 128-bit address space, but the ability to calculate with 128+ bits could be practical.
“@AU: Well, perhaps we don’t need a 128-bit address space, but the ability to calculate with 128+ bits could be practical.”
We can do that now.
“We can do that now.”
I know. I meant having 128-bit (or more… 256, 512, whatever) wide general purpose registers.
“We will never need a 128-bit address space!”
You do realize that almost every time somebody has said we will never need X amount of some kind of computer hardware, we ended up developing and eventually needing it.
Take FAT, for instance. We won’t need more than 2 GB of hard drive space for home users; it doesn’t make sense to make a file table that can handle more. Heck, there will probably never be hard drives that big! I’m sure that’s what the creators of FAT first thought.
I’m sure when computers were first thought up, nobody ever imagined the need for a hard drive or anything similar. Good old removable storage will always be enough.
The point I’m trying to make here is NEVER say never, unless of course you’re talking about how often you should say never.
I know it does not have 256-bit addressing. The idea behind it is called VLIW, and the point is to increase IPC. It was mainly targeted at ultra-portables and long-battery-life markets. Since they have high IPC and low power usage, they were going to make something like an 8-way 1U server. It would be very cost-effective, IIRC. Since the northbridge is on the CPU and it would be in a server environment, I would imagine each CPU would need to address a minimum of 4 GB, but in handhelds and laptops 4 would be plenty. Just check the link and Google.
I can think of some uses for a 128-bit address space. Maybe you want all computers on the planet to share one address space. When a computer connects to the internet, its memory is paged into the global address space.
For something like this you would need 128 bits, as 64 bits would not be enough. This might be a bit far-fetched, but certainly not impossible.
“I can think of some uses for a 128-bit address space. Maybe you want all computers on the planet to share one address space. When a computer connects to the internet, its memory is paged into the global address space.”
Why would anyone want to do that?
Why would anybody want to map files into memory when there are nice APIs for reading and writing files sequentially? 🙂
This is just wild speculation, but you could basically eliminate all network and file I/O APIs. Objects behave identically whether they are in memory, on disk, or on the other side of the world. The address of an object in the global address space would automatically be a GUID.
You would need a very intelligent caching mechanism to cope with large latencies, though.
I agree, it’s dangerous to say “never”.
But I don’t think the creators of FAT chose 2 GB because they thought they would never need more space. They chose 2 GB for economic reasons (performance/size/cost ratio).
Furthermore, we are talking about amounts which we just can’t handle. It’s not enough to invent the technology (which is not possible with present science) and just build the hardware. You must create programs to use that hardware and collect data for processing. These programs and data come from humans, and I can’t imagine that we are smart enough to handle it.
Yet another example: imagine you use the frequency of gamma rays (the highest known frequency, afaik, much higher than that of visible light), 10^19 Hz, to transfer the data to your computer. Then you need 8*10^12 years to transfer 2^128 bytes to your computer (theoretically, without overhead etc.).
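Here’s the arithmetic behind that estimate as a quick Python sketch; one bit per cycle at 10^19 Hz is my own simplifying assumption:

bits = 2**128 * 8                          # 2^128 bytes expressed as bits
seconds = bits / 1e19                      # one bit per cycle at 10^19 Hz (assumption)
print(seconds / (365.25 * 24 * 3600))      # roughly 8.6e12 years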
“This is just wild speculation, but you could basically eliminate all network and file I/O APIs. Objects behave identically whether they are in memory, on disk, or on the other side of the world. The address of an object in the global address space would automatically be a GUID.
You would need a very intelligent caching mechanism to cope with large latencies, though.”
Again, why would you want to do this? Just to get rid of “networking”? Frankly, you’d just replace it with something else (another API) which would perform the same task, just with a likely reduction in data security.
My experience is that the more detailed visual and audio information becomes, the more we appreciate and observe in our environment. This translates into an ever-expanding need for more data processing and memory. On a side note, I’m still waiting for 24/96 DVD-Audio to appear in the mainstream. Stuff the iPod and the like, I want my detailed audio.
Yeah, it would be faster, not because of the 64 bits, but because it has plenty more registers and can read more memory in one cycle. Of course it won’t make much difference for ls or disk access.
“Yeah, it would be faster, not because of the 64 bits, but because it has plenty more registers and can read more memory in one cycle. Of course it won’t make much difference for ls or disk access.”
THAT’s the advantage of x86-64, not its 64-bitness.
Before claiming “we’ll never need a 128-bit address space” is a dangerous remark, please notice that it works exponentially, so the leap from 64 (or even 96) to 128 bits is magnitudes greater than that from 1 to 32. For instance, Moore’s law suggests a twofold increase in CPU capacity roughly every 3 years; if the need for address space grew at the same pace (one extra bit every 3 years), it would take 64 * 3 = 192 years to get from 64 to 128 bits.
“You do realize that almost every time somebody has said we will never need X amount of some kind of computer hardware, we ended up developing and eventually needing it.”
128 bits is going to be the practical limit for a long, long, long time. An engineer at Sun (his account is in Sun’s blogs) calculated that filling their 128-bit ZFS would require so much energy that it would boil all the Earth’s oceans. Literally. 128 bits is 2^96 times the address space of 32 bits; this is some astronomic-scale stuff, here.
Here is the blog I referred to (Jeff Bonwick): http://blogs.sun.com/roller/page/bonwick/20040925#128_bit_storage_a…
His conclusion: “Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.”
here comes the BSOD
“I can think of some uses for a 128-bit address space. Maybe you want all computers on the planet to share one address space. When a computer connects to the internet, its memory is paged into the global address space.”
128-bit addresses work out to something like 10^22 per square foot of the earth’s surface. Quite sufficient for IPv6, I think, especially when your toaster and your fridge have IPs.
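Rough check in Python, taking Earth’s surface as about 5.5*10^15 square feet (my own round figure):

earth_sq_ft = 5.5e15               # approximate surface area of the Earth in square feet (assumption)
print(2**128 / earth_sq_ft)        # ~6e22 addresses per square foot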
I think AU’s comments regarding 128 bits were fair, though I personally would have chosen 256 for this argument, as that’s pretty close to the number of atoms in the universe.
Yes, we all know that it’s silly to say you will never need X amount of something-or-other in the future, but 2**256 is a really big number [1].
[1]: 115792089237316195423570985008687907853269984665640564039457584007913129639936
For reference’s sake, according to http://www.google.com.au/search?q=cache:4VnclNG7zdgJ:www.sunspot.no… the number of atoms in the universe is about:
>>> int(1e79)
9999999999999999673560075006595519222746403606649979913266024618633003221909504
While yes, we may never fill things like 128-bit file systems and so forth, I think you have to look at them more in terms of when 64-bit stuff gets maxed out. That’s when 128-bit CPUs and so forth will be needed, since we probably will manage to max out 64-bit stuff.
Oh, and you all seem to forget a very powerful force: Marketing. They will drive a lot of this stuff.
Today’s 64-bit CPUs don’t actually address 64 bits of physical or virtual memory. Itanium can address 50 bits of physical memory and 60 bits of virtual memory (1,024 TB and 1,048,576 TB, respectively). AMD’s Opteron can address 40 bits of physical address space and 48 bits of virtual address space (1,024 GB and 256 TB, respectively). EM64T-enabled Xeons and P4s can address 36 bits of physical address space (64 GB).
Even 64-bit ISAs have quite a bit of leeway left as to how much system memory they can access in a straightforward manner.
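If you want to sanity-check those figures, the conversions are just powers of two (quick Python):

print(2**50 // 2**40)    # 50-bit physical: 1024 TB
print(2**60 // 2**40)    # 60-bit virtual: 1048576 TB (1 EB)
print(2**40 // 2**30)    # 40-bit physical: 1024 GB
print(2**48 // 2**40)    # 48-bit virtual: 256 TB
print(2**36 // 2**30)    # 36-bit physical: 64 GB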
That’s the point! We don’t need the whole 64 bits for addressing right now, and we won’t need a 128-bit address space for the foreseeable future.
But for calculations it makes some sense, especially if you work with encryption (integer) or chaos (floating point).
Probably we will see machines with 128-bit (or 256-bit) general registers for calculations and 64 bits for addressing.
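As a toy illustration of why wide registers would help there: multiplying two 64-bit integers already produces a 128-bit result, which a 64-bit CPU has to split across two registers, while Python just does it in software (the operand values below are arbitrary examples):

a = 0xFFFFFFFFFFFFFFFF            # largest 64-bit value
b = 0xFEDCBA9876543210            # arbitrary 64-bit value
product = a * b                   # needs 128 bits to hold
low = product & (2**64 - 1)       # the half a 64-bit CPU keeps in one register
high = product >> 64              # the half that spills into a second register
print(product.bit_length())       # 128
assert (high << 64) | low == product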
Has Firefox released any build which is 64-bit and aimed at the upcoming Windows x64 release?
We will probably be needing huge amounts of memory and wide address space, but not in personal computers. Computing may be totally different in ten or twenty years, and PCs may not exist in that future.
Anyway, back to the subject :p
I’m with Spotted Owl: the lack of 64-bit software is a problem.
There is a Firefox build, but I couldn’t download it for some reason.
A big problem as well is that even though my machine was okay for drivers, other 64-bit stuff needed for day-to-day work is missing at the moment, the main one being Flash, so browsing in 64-bit IE is quite annoying.
Another issue for me is that I run nearly all non-game software through Virtual PC; that didn’t work, so it had to go, and I’m back on 32-bit.
Finally, another issue is 64-bit codecs; Xvid, I think, is nearly there, but the others…
“Has Firefox released any build which is 64-bit and aimed at the upcoming Windows x64 release?”
There is a build of Mozilla Firefox for it. You can find it here: http://www.mozilla-x86-64.com/. But it is quite unstable and I couldn’t get it to run at all.
One use for 128-bit addressing would be to map every atom of our body, to make teleporting a reality.
Oops… is there a parity bit or ECC on these new memory modules? No? We’ll have trouble…
Sorry for my bad English… it’s not my native language.
Seems very rough… I guess this isn’t a priority at the moment. How much of the architecture of a program has to be remade in order for it to become 64-bit? Is it just replacing some small pieces, or is it actually rewriting the entire codebase? It would be nice to slowly migrate to 64-bit computing, but without apps there’s really no point =(.
For those who still run 16-bit apps… <shakes head> It’s about damn time they dump support for them.
Although I really dislike Linux as an operating system and the culture surrounding it, I must admit that it really makes sense right now in the 64bit arena.
Hope competition comes around soon with BSD and I hope that Haiku considers going 64bit as well =)
Creative, ATI, NVidia, VIA all seem to be working on 64 bit drivers for XP 64.
My question is about printer drivers. Does XP 64 require new printer drivers? If I recall, Windows just needs an INF file for each printer, since the actual “drivers” are supplied by MS. Is this still the case, or will HP, Canon, and Lexmark need to make new drivers? I’ve read posts from people on other boards saying that they’re unable to print in XP 64.
I have an HP 5L. On the link below from their site, no drivers for XP 64 are available.
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp…
..well, they don’t really have a choice about making 64-bit drivers if they want customers to buy their crap^H^H^H^H products. Printers are gonna need 64-bit INF files.. just kidding; if it’s only the .inf, I presume they’ll work, otherwise not.
It’s not about whether you can index 2^128 of address space; it’s about more logical organisation of the address space, the use of content indexing, and so on.
For example, IPv4 address space of 2^32 was largely limited by class A/B/C/etc partitioning. IPv6 address space of 128 bits may seem large, but the important thing is that there are plenty of 48/64 bit prefixes to help top level routing and partitioning.
In another case: say you have a process with a 4 GB address space. You can actually exhaust this address space quickly by creating thousands of threads, each of which has 2 MB of stack space: although you never use 2 MB of stack in each thread, the OS will reserve 2 MB of virtual address space, and 2048 threads later you’ve exhausted your process address space.
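A quick Python sanity check of that arithmetic (2 MB reserved per thread stack, as in the example above):

stack_reserve = 2 * 1024**2                 # 2 MB of address space reserved per thread
threads = 2048
print(threads * stack_reserve == 2**32)     # True: the whole 4 GB address space is spoken for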
So, the point is, that 128 bit processors and, in general, large “addressing schemes” are not always about the totality of all permutations of the address space, but about efficiencies in partitioning and indexing.
The real question is: what applications, architectures, and other issues would _drive_ the need for larger than 64bit addressing schemes.
He didn’t try Firefox, or he would know there were problems with Windows XP 64-bit. When I used this very build, Firefox would cause reboots at random points. Also, whenever I plugged in an external hard drive or MP3 player it would freeze; to get it out of the frozen state I would have to remove the hard drive and turn off the power.
“Although I really dislike Linux as an operating system and the culture surrounding it, I must admit that it really makes sense right now in the 64bit arena.
Hope competition comes around soon with BSD and I hope that Haiku considers going 64bit as well =)”
You dislike the culture surrounding it? There are so many people with different cultures, colors and beliefs contributing to Linux and Open Source Software that there IS no one culture “surrounding it”. And Linux is about choice, including the choice not to choose it.
Have a read of what Jeff Bonwick had to say about it at http://blogs.sun.com/bonwick/20040925#128_bit_storage_are_you
Makes an interesting read.
‘His conclusion: “Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.”‘
Ah yes, with today’s technology. But once again, as with every advancing technology, this will some day not even be a factor.
My brother got his new 64-bit AMD and tried Windows XP 64-bit. Installing it took hours, and performance is really poor; for instance, moving an application window on the screen took minutes, compared to seconds on the usual 32-bit OS. Scrolling down with the mouse wheel takes forever, etc. Of course, Windows XP 64-bit isn’t ready yet, just a beta version, and I’m sure it will get better.
But at least this episode made my brother willing to try Linux.
The problem with any dramatic change on the Windows platform, like 32-bit to 64-bit, is the legacy of proprietary applications. For a long time people will not buy an x64 CPU because there are no drivers and no 100% 64-bit applications.
If people used Linux or the *BSDs, we could be moving from 64-bit to 128-bit CPUs now without problems. If we have 128-bit GPUs, why not 128-bit general-purpose CPUs? Answer: M$ Windows legacy.
You proved my point very well… the culture, as in cocky, averse to self-criticism, etc. etc. etc.… the Linux culture…
As opposed to, say, Mac users?
So Microsoft is releasing a 64-bit version of XP. Isn’t that a little late? If we were to believe Microsoft, they will release the first versions of Longhorn in a year or so. Once it is out, it will take a while, to say the least, before we get 64-bit software. People who need 64-bit performance will probably have solved the problem in other ways already. Wouldn’t it have made more sense to put a little more effort into Longhorn to get that out on time?