Unlike most areas of the technology business, 64-bit computing has somehow remained immune to the forces of commodity competition. Read the article at News.com by SuSE’s CEO.
"Five years from today, nobody in IT will be buying 32-bit servers (and maybe not even 32-bit laptops)."
I agree with Mr. Seibt. Once Microsoft releases an AMD64 version of its server OS, I think the market will switch to 64-bit computing in a hurry. Whether this is just a market move, and whether the technology really justifies the change, is irrelevant. Why run a 32-bit OS if you can have a 64-bit OS for the same price? Obviously, the same applies to hardware.
Do you need a 64-bit server to run Apache? Nope. But who cares? Given the choice of running a 64-bit version of Apache or a 32-bit version, most people will choose the 64-bit version. I know I would!
Will Linux users benefit from 64-bit CPUs? Again, who cares? It really makes very little difference right now, but five years from now I doubt any Linux user will be purchasing a 32-bit CPU.
It's quite possible that five years from now, a 32-bit mainboard will be next to impossible to buy, just like it's impossible to buy a 486 mainboard nowadays.
So what's 64-bit computing to Linux? Quite simple: a non-issue.
Just because Microsoft releases something doesn't make it a standard. They just released Windows Server 2003, and they are hoping that all the NT machines out there will be replaced, but who wants to buy new hardware so they can use a new OS when what you have works just fine and money is tight? 64-bit will catch on slowly, because replacing those corporate desktops, workstations, servers and mainframes can really hurt the pocket. Not only do you have to buy new OS licenses, you also have to upgrade or replace hardware.
64-bit for Linux means we can use tar again for backups. It would now be possible to tar up an 80GB directory into a single file and maybe have the system crunch on it for a day or two to bzip2 it. Can you imagine how long it would take to mv that file to an NFS directory?
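A minimal sketch of the file-size point behind that comment, assuming a glibc-style Linux system (the file name below is just an example): on 32-bit builds, off_t was historically a signed 32-bit value, so a single file such as a big tar archive topped out at 2GB unless you explicitly asked for large-file support. On a 64-bit build, or with _FILE_OFFSET_BITS=64, the limit effectively disappears.

/* Illustrative only: report how wide off_t is and how big a file really is. */
#define _FILE_OFFSET_BITS 64    /* request a 64-bit off_t even on 32-bit glibc */

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = argc > 1 ? argv[1] : "backup.tar.bz2";  /* hypothetical file */

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }

    /* With a 64-bit off_t, an 80GB archive is reported correctly
     * instead of overflowing a 32-bit size field. */
    printf("off_t is %zu bytes; %s is %lld bytes\n",
           sizeof(off_t), path, (long long)st.st_size);
    return 0;
}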
So, since many people may agree with me that 64-bit CPUs are long overdue: how many years do you think it'll be before we see 128-bit CPUs?
Michael Lauzon
Founder & Lead Project Manager
InceptionOS Project
http://www.inceptionos.org/
[email protected]
Well, 64-bit magnitudes can be used to map a LARGE amount of information, and they offer good enough precision for most scientific computations.
Pointers larger than that are really not that useful; after all, do you understand what a 2^64 flat address space offers? So 64-bit will be good for a loooong time. And no, this is not one of those shortsighted "640KB ought to be enough for anyone" comments. We are talking about exponential orders of difference in memory between 32-bit and 64-bit (though most 64-bit machines are really 52 or fewer bits addressable).
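To put rough numbers on that difference (a small illustrative program, nothing more; as far as I recall, the AMD64 spec caps physical addresses at 52 bits, which matches the "52 or fewer" caveat above):

#include <stdio.h>

int main(void)
{
    /* Flat address space per pointer width, expressed in sane units. */
    unsigned long long gib_32 = 1ULL << (32 - 30);   /* 2^32 bytes = 4 GiB  */
    unsigned long long eib_64 = 1ULL << (64 - 60);   /* 2^64 bytes = 16 EiB */

    printf("32-bit pointers: %llu GiB of flat address space\n", gib_32);
    printf("64-bit pointers: %llu EiB of flat address space\n", eib_64);
    /* Actual AMD64 hardware wires out fewer bits (52 physical, 48 virtual
     * at the time), which is still an enormous amount of room. */
    return 0;
}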
Also, 128-bit datapaths have terribly long critical paths, hence making them quite slow (i.e. harder to rev up the clock) with respect to 64- or 32-bit datapaths.
Besides the larger native pointers, there is really no added value to a 64-bit machine. If you need the precision, you could actually get better results with a 32-bit machine that is at least 2x as fast. And 32-bit machines (or even 16-bit) are more efficient area- and speed-wise. With current production technologies you could have ridiculously fast 8/16-bit ALUs… they are however less "marketable" than brand new shiny 64-bit ALUs.
"64-bit for Linux means we can use tar again for backups. It would now be possible to tar up an 80GB directory into a single file and maybe have the system crunch on it for a day or two to bzip2 it. Can you imagine how long it would take to mv that file to an NFS directory?"
…..Or you could just use a DLT tape that does 2x compression on the fly. In fact you could have been doing that since the mid 90s….
Javi,
‘Also 128bit datapaths have terribly large critical paths, hence making them quite slow (i.e. harder to rev up the clock cycle) with respect to 64 or 32bit datapaths’
You cannot really say that, because how do we know how CPUs will be made in a few years? They may find a new way to do it and have the CCs while having 128-bit CPUs. I usually ask a similar thing about video game consoles: how long until we see 1-Byte — 1024 bits = 1 Byte, if I am not mistaken — graphics?
Michael Lauzon
Founder & Lead Project Manager
InceptionOS Project
http://www.inceptionos.org/
[email protected]
-nt-
I was extremely tired (and still am!) when I wrote that, and I'm hungry… at least I knew I'd make a mistake.
8 bits = 1 byte
16 bits = 2 bytes
32 bits = 4 bytes
64 bits = 8 bytes
128 bits = 16 bytes
Etc.
Right. What I should be asking is when we'll get into major KB or MB for console gaming — I have a feeling someone's going to say we already have that.
Michael Lauzon
Founder & Lead Project Manager
InceptionOS Project
http://www.inceptionos.org/
[email protected]
Linux gave AMD the needed leverage to create the Opteron in the first place, as they knew at least someone could support it after the sale. It removed their reliance on MS "getting around to it" favoritism with Intel, and it makes MS have to move on support or lose servers to Linux!!
Opteron really, truly is the first "Linux" CPU! And it's 64-bit to boot!
"You cannot really say that, because how do we know how CPUs will be made in a few years? They may find a new way to do it and have the CCs while having 128-bit CPUs…."
Welp, yes I can really say that. If you want a true 128-bit ALU, you will first have a very large (understatement) multiplier producing a 128×128->256 result, with incredibly long signal transmission requirements and an equally impressive area. The same will also be true of the wider registers needed, and the caches too. And of course the number of pins explodes as well. When you design a chip, you first look at the mix of instructions that current systems are running, and at possible future trends. If you look at the number of 128-bit × 128-bit multiplies (to keep up with the multiplier example), you can see that their number is significantly smaller than normal 32×32 or 64×64 multiplies. A 128×128 multiplier would be significantly slower at those two precision points, with a significant area disadvantage with respect to native 64×64 units. Whereas the same 128×128 unit will be faster only when dealing with native 128-bit data chunks, which is only a minimal part of current (or even future) integer needs.
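To make the multiplier point concrete, here is a rough sketch (plain C, nothing vendor-specific) of what software usually does on the rare occasion it needs a wider result than the ALU provides: build it from narrower multiplies. A 64×64->128 product out of 32-bit halves takes four partial products plus carry handling; a native 128×128 unit would repeat the same structure scaled up, which is exactly the area and critical-path cost described above.

#include <stdint.h>
#include <stdio.h>

/* Schoolbook 64x64 -> 128-bit multiply built from 32-bit halves and
 * ordinary 64-bit operations: four partial products plus carry handling. */
static void mul64x64_to_128(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;   /* low  x low  */
    uint64_t p1 = a_lo * b_hi;   /* low  x high */
    uint64_t p2 = a_hi * b_lo;   /* high x low  */
    uint64_t p3 = a_hi * b_hi;   /* high x high */

    uint64_t carry = ((p0 >> 32) + (uint32_t)p1 + (uint32_t)p2) >> 32;

    *lo = p0 + (p1 << 32) + (p2 << 32);
    *hi = p3 + (p1 >> 32) + (p2 >> 32) + carry;
}

int main(void)
{
    uint64_t hi, lo;
    mul64x64_to_128(~0ULL, ~0ULL, &hi, &lo);
    /* (2^64 - 1)^2 = 0xfffffffffffffffe0000000000000001 */
    printf("%016llx%016llx\n", (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}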
There was a clear advantage when moving from 32-bit to 64-bit. There is, however, not the same incentive to go a step further and move to full 128-bit ALUs. Once you move up from 64-bit, you start hitting the law of diminishing returns. Could you do it? Sure, but there is no incentive to develop these types of CPUs. CPU design is an art of balance; the fact that you can do it doesn't mean that you should.
A 64-bit machine has enough addressing capability to keep up for a long while, while the speed degradation over 32-bit machines is not too bad. However, 128-bit CPUs are not reasonable for the foreseeable future, not because of technical barriers in the way (and there are plenty) but rather because they make no economic sense…..
This all applies to general-purpose CPUs; your mileage may vary with application-specific processors. There it may make sense to have wider units, just because of the application/code requirements….
Forget the bigger, fatter CPUs. They're getting too hard to build for the returns the chipmakers get. The next breakthrough chip will be either clockless or analog. Both can use conventional-type manufacturing but until recently have been neglected.
Clockless chips exist for very small circuits right now (pagers come to mind), but everyone is researching them! They require much more care in building and compiling up front [timing is everything], but the lack of a clock saves up to 50% of the power!
Analog CPUs are still a dream, but the researchers are getting closer. The advantage is that analog circuits can handle things like radio signals or process controls with an ease that dwarfs digital circuits. Also, they would present a completely unique way of solving non-computational problems, i.e. "better", "worse", "maybe", that conventional CPUs cannot.
<smile> After that the robots [Who needs people?] will build the sub-atomic based “glowing crystals” that are all the rage in Sci-Fi! </smile>
I'll upgrade, but I will not be among the first. I'll wait until I can pick up low-cost 64-bit hardware, kind of like you can pick up a 1.7 GHz processor right now for peanuts.
I would rather have an object-oriented 32-bit Standard C++ platform implementation than a 64-bit platform that is implemented in C.
Maybe I can find some gullible individual to buy my old Commodore 64 – I'll just tell them it's 64-bit and so advanced it doesn't need a hard drive or fan.
<cheap_shot>
A digital circuit made of transistors (like our modern CPUs) is actually an analog device!
</cheap_shot>
🙂
Those who do not learn from history… i.e. there is a reason why digital circuits were developed in the first place!!!!
*sigh*
2^8 = 256
2^16 = 65536
2^32 = 4294967296
2^64 = 18446744073709551616
(that's the number of distinct values, or addressable bytes, each width gives you, not bits)
I don't think you'll ever see a mass-produced, transistor-based 128-bit CISC CPU. The CPU will be entirely too hot, or it will run at 400MHz…
2x address space = 2x transistors = 2x power consumption.
There are still aircraft flying today that were designed totally using analog computers.
Hybrid digital/analog computers took over from them in the mid 60’s.
The hybrid computers are still being built and doing serious aero simulation work today…
http://piofree.com/aircraft_ground_test.htm
Thankfully, you don’t have to use patch cords any more.
Do these 64-bit machines still use the directory/table (possibly even a third, higher level) scheme for paging?
Seems that for such a large address space you’re going to have to maintain some very large structures in memory to describe it. Is it a significant amount? I know memory is pretty cheap and all these days so it might not matter as much.
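For what it's worth, AMD64 keeps the familiar directory/table scheme and just adds levels: a 48-bit virtual address is split across four page-table levels of 9 bits each plus a 12-bit offset into a 4 KiB page, and only the parts of the tree you actually touch get allocated, so the bookkeeping scales with the memory in use rather than with the 2^64 range. A rough sketch of the address split (the example address is arbitrary):

#include <stdint.h>
#include <stdio.h>

/* Decompose a 48-bit AMD64 virtual address into its four 9-bit page-table
 * indices (PML4, PDPT, PD, PT) and the 12-bit offset within a 4 KiB page. */
static void split_virtual_address(uint64_t va)
{
    unsigned pml4 = (va >> 39) & 0x1FF;        /* bits 47..39 */
    unsigned pdpt = (va >> 30) & 0x1FF;        /* bits 38..30 */
    unsigned pd   = (va >> 21) & 0x1FF;        /* bits 29..21 */
    unsigned pt   = (va >> 12) & 0x1FF;        /* bits 20..12 */
    unsigned off  = (unsigned)(va & 0xFFF);    /* bits 11..0  */

    printf("va %#llx -> PML4[%u] PDPT[%u] PD[%u] PT[%u] + offset %#x\n",
           (unsigned long long)va, pml4, pdpt, pd, pt, off);
}

int main(void)
{
    split_virtual_address(0x00007f1234567abcULL);   /* arbitrary example */
    return 0;
}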
As for analogue vs digital. What about light (I know someone somewhere has been working on this one)?
Let’s say you have a few hundred input frequencies, a sensitive detector and some incredibly clever work with prisms and silicates (All of which is beyond me :>) then you could have a few hundred signals passing through the processor at the same time. Seeing as it’s low powered light the heat problem goes away, the speed problem goes away, and everyone’s happy. Hmm, how to construct a NAND with light beams…nice dream :>
“As for analogue vs digital. What about light (I know someone somewhere has been working on this one)?”
What about it? Light is an analog entity that can be used to represent digital data. So far, optics in computers are used to replace transmission lines, but the designs are still digital in nature.
” Seeing as it’s low powered light the heat problem goes away, the speed problem goes away, and everyone’s happy.”
First you still have to excite those elements to give off photons, i.e. you have to turn “on” the light source. That takes energy, i.e. you have a source for heat production/dissipation.
Second you still have switching elements, and thus there is your speed trap.
Remember your circuit is only as fast as your slowest component in the pipeline, hence you have to be careful where your bottlenecks are.
“I agree with Mr. Seibt. Once Microsoft releases an AMD64 version of its server OS, I think the market will switch to 64-bit computing in a hurry.”
Yeah… and all of us admins and support personnel will have to support 16-, 32-, and 64-bit applications in one way or another. Maybe very few 16-bit apps anymore, but there are a lot of non-corporate users who still have 16-bit programs running on 95, 98, and ME that won't run on anything newer.
Linux folks will just fix and re-compile.