AMD will drive its Hammer family of 64-bit processors into the mobile market in the second half of 2003, a year or so after the family makes its debut in servers, the company revealed at its analyst confab yesterday. The company also laid out its full roadmap, with details about its desktop parts and the ClawHammer CPUs.
- The chip maker extended its mobile roadmap into 2003 at the conference, scheduling Mobile Hammer’s arrival for the second half of 2003. The model numbers, derived from AMD’s new ‘more than megahertz’ nomenclature, will be 3800 and 3600. These parts will draw 35W and 25W, respectively. More information and full roadmap table here.
- AMD’s drive for desktop dominance will see the company ship what it will undoubtedly claim is the equivalent of a 3.4GHz Intel processor when its first desktop 0.13 micron Hammer chip arrives in the fourth quarter of next year. More information and full roadmap table here.
- ClawHammer, the member of the Hammer family intended for uni-processor and dual-CPU servers, is still expected to ship late in the second half of 2002, but AMD’s roadmap suggests that will happen right at the end of the year. More information and full roadmap table here.
I hope that AMD gives up on the silly Intel P4 (alleged) frequency-equivalency chip numbering scheme by the time the 64-bit processors arrive. People who read sites like OSNews can always go to one of the hardware testing sites and get the true story, but as far as I can tell it does not contribute to the education of the general IT-consuming public.
I think it will be very interesting to see what happens when 64 bit chips become the standard for computers. 32 bit systems are already quite impressive. I wonder what the advantages, in terms of capability, 64 bit chips will bring…
I’ve said this before… I think a 64 bit OS will bring new insights into OS development. For example, being able to map the entire disk into the kernel address space would make writing an FS driver quite an interesting exercise. You can’t afford the address space on a 32 bit processor to do this with current disk drive sizes. The same goes for physical memory. It is common for the memory size of a typical machine to exceed the capacity of some kernels, mainly because those kernels use some of the kernel address space to directly map physical memory for performance reasons. If you want to retain that kind of feature in a 32 bit OS, you have to categorize memory pages into ones you want to be able to map directly and ones you don’t care about, which complicates the VM design somewhat. There are also complications with DMA and such on legacy hardware (the 16MB limit), but I believe that issue has been resolved in recent times.
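To put rough numbers on that, here is a back-of-envelope sketch in C. It just assumes the common 3GB/1GB user/kernel split as an illustration, not any particular kernel's actual figures:

```c
#include <stdio.h>

int main(void)
{
    /* A common 32 bit layout: 4GB of virtual addresses, ~1GB kept for the kernel. */
    unsigned long long vspace      = 1ULL << 32;   /* 4GB of virtual addresses */
    unsigned long long kernel_part = 1ULL << 30;   /* ~1GB kernel window       */

    /* Only the kernel window is available to direct-map physical RAM, and
     * part of it is needed for other kernel mappings, so a machine with a
     * few GB of RAM already exceeds what can be permanently mapped.        */
    printf("user space        : %llu MB\n", (vspace - kernel_part) >> 20);
    printf("kernel direct map : at most %llu MB of physical RAM\n",
           kernel_part >> 20);
    return 0;
}
```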
32 bit architectures require that you use windowing techniques to access very large physical structures. A 64 bit architecture would obviate that need for some time. Anyone like to postulate how long it would take to exhaust a 64 bit address space? Are there applications around that already exhaust this?
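For scale, a quick back-of-envelope (my figures, nothing authoritative): 2^64 bytes is roughly 18 million terabytes, and even streaming through it at 1GB/s would take centuries:

```c
#include <stdio.h>

int main(void)
{
    /* 2^64 bytes, computed in floating point to avoid integer overflow. */
    double total = 2.0 * (double)(1ULL << 63);          /* ~1.8e19 bytes */

    printf("2^64 bytes = %.1f million terabytes\n", total / 1e18);

    /* Time to touch every byte at 1GB/s (generous for a 2002 machine). */
    double seconds = total / 1e9;
    printf("at 1GB/s   = about %.0f years\n", seconds / (365.25 * 24 * 3600));
    return 0;
}
```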
P
I have to say that I can't see an obvious need for 64bit just yet. Some 32bit processors have page extensions to extend their address ranges to 36 bits, and current processors seem fast enough to me. Maybe for heavyweight database work, but surely all this is done on big iron anyway??
Try doing some digital recording (audio or video) and machines with large amounts of RAM become kinda useful.
Also remember that some OS’s limit RAM per process to a fraction of the addressable space (NT, for example, gives only 2GB to the user address space). You can get at extra memory, but you have to page-switch it, much like the good old days of the Apple II or like EMS. Not very nice having to program like that, believe me.
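For anyone who hasn't had to do it, here is roughly what that windowed style of programming looks like. The calls here are made-up stand-ins for whatever the OS actually provides (e.g. NT's AWE functions); it's a sketch only:

```c
#include <stddef.h>

/* Hypothetical calls standing in for an AWE/EMS-style windowing API:
 * the OS gives us a fixed virtual window and lets us point it at
 * successive banks of physical memory that won't all fit at once.   */
extern char *reserve_window(size_t size);                 /* reserve VA only */
extern void  map_bank(char *window, size_t size, size_t bank);

enum { WINDOW_SIZE = 64 * 1024 * 1024 };                   /* 64MB window */

/* Read one byte from a data set far larger than the 2GB user space. */
char read_big(char *window, size_t offset)
{
    map_bank(window, WINDOW_SIZE, offset / WINDOW_SIZE);   /* bank switch */
    return window[offset % WINDOW_SIZE];                   /* then access */
}
```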
There are also other reasons where a 64 bit processor might be handy, and that's where you need a native 64 bit type. With the x86 you can still do this using the FPU, but it's a bit clunky. A good example of where 64 bit data types are useful is OS software where you want to handle 64 bit file sizes (e.g. NTFS). It is not uncommon to have disk drives > 4 Gigs now, and while most typical files are under 4 gig in size, your file system still needs to handle files up to 2^64 bytes in size. In PetrOS, for example, all file system calls are 64 bit capable, as are system timer calls.
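A minimal illustration of the data type side of this in plain C (nothing PetrOS-specific, just an assumed uint64_t offset type): a 64 bit file size is a single integer, and on a 64 bit CPU the arithmetic below is a single register operation rather than a synthesized pair of 32 bit ones.

```c
#include <stdint.h>
#include <stdio.h>

/* A file size or offset that can describe anything up to 2^64 - 1 bytes. */
typedef uint64_t file_off_t;

int main(void)
{
    file_off_t disk_size = 120ULL * 1000 * 1000 * 1000;  /* a 120GB volume     */
    file_off_t file_size = 9ULL * 1024 * 1024 * 1024;    /* a 9GB capture file */

    /* On 32-bit x86 this 64-bit arithmetic is synthesized from pairs of
     * 32-bit operations (or pushed through the FPU); on a 64-bit CPU it
     * is one register-width subtract.                                    */
    printf("free after write: %llu bytes\n",
           (unsigned long long)(disk_size - file_size));
    return 0;
}
```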
I think the arrival of 64 bit processors is just about at the right time.
I do think the barrier between big iron and modern micros is slowly fading, and it will fade even more when the 64 bit micros hit the market.
And of course, don’t forget the marketing hype and testosterone contests that are going to surround the 64 bit processors )))
P
Back in the 80’s, the CDC Cyber 180’s NOS/VE had a 64 bit virtual memory address space, and mapped everything in the same address space. I liked it. Not that we would want NOS/VE on the new 64 bit chips, but every time I hear 64 bit address register, I think single address space.
That is certainly something worth considering. It would be a boon to any OS subsystem that needs to interact with multiple processes at once. A typical example might be a GUI subsystem. In some OSes, the GUI subsystem has to be aware of each process's view of the address space and take appropriate measures. With a single address space, GUI structures in user address space would not need to be copied in and out of kernel address space or mapped in and out of the current VM.
The downside though is that applications would either have to be written using position independent code or you would need to relocate them at some considerable cost. And of course you would need to protect apps from each other.
P
very interesting…
I see a web search on “single address space” will turn up a small but apparently thriving research community with a handful of operating systems to play with. Mailing list and everything.

I don’t understand the PIC & relocation issue, but even if it’s true, we would expect that for most applications written for 64 bit platforms it would just be a compiler switch, right?

File mapping is also more fun with 64 bits, because the space will tend to be segmented and each file maps into a segment, where it has room to extend and not collide with memory ranges allocated for other things.
The references are interesting to say the least.
With position independent code, it’s fairly processor dependent. I believe this was part of the design reasoning behind the original x86 segment registers, but real-life programs rapidly outgrew the 64KB a 16 bit pointer could reach, and so pointers ended up being 16:16 segment:offset word pairs.
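For anyone who missed that era, the 16:16 scheme resolves a segment:offset pair like this (a small sketch of the address arithmetic only, not of any particular compiler's far pointer type):

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode x86: a 16-bit segment register and a 16-bit offset combine
 * into a 20-bit physical address; the segment is just shifted left by 4. */
static uint32_t far_to_linear(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* Two different 16:16 pairs can name the same byte, which is part of
     * what made position independence awkward once code outgrew 64KB.    */
    printf("0x1234:0x0010 -> 0x%05X\n", far_to_linear(0x1234, 0x0010));
    printf("0x1235:0x0000 -> 0x%05X\n", far_to_linear(0x1235, 0x0000));
    return 0;
}
```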
Generally, your idea would work for a self contained program which was reminiscent of stuff in the earlier days of computing. In the pre-microprocessor days, I remember there being debates about whether a segmented architecture was better than a fixed page size, demand paged architecture. At the time, there were considerable constraints on memory size so swapping tended to be a frequent operation, and there were benefits sometimes from having a segmented architecture. We have moved on now to accept that fixed size demand paged architectures are now the norm, and we also can add more memory than we can poke a stick at. We also have distributed models where it would be possible for the address space to be divided up across multiple machines.
re: PIC & relocation
With the advent of dynamic linking (.dll & .so files), there is always going to be some patching of the executable to wire the module references together. However, if the OS were smart enough, it could patch all EXEs/DLLs on disk so that every EXE/DLL file instance had its own sub-allocation of the machine’s address space. It could even be related to the file mapping issue I refer to further on. A program module will always load faster if it doesn’t need relocation. Perhaps compiler options to allocate a processor-unique address range for a given program module might also be a useful feature.
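A sketch of the bookkeeping that idea implies (all names hypothetical, not any existing loader's API): the system keeps a persistent table handing each EXE/DLL its own slice of the shared address space, and the image gets patched to that base once, on disk.

```c
#include <stdint.h>

/* Hypothetical: a persistent, system-wide registry that hands every
 * EXE/DLL on disk its own non-overlapping slice of the address space,
 * so the image can be patched to its final base once and never needs
 * relocating at load time again.                                      */
struct module_slot {
    const char *path;       /* e.g. "C:\\SYS\\GUI.DLL"           */
    uint64_t    base;       /* assigned once, same in every process */
    uint64_t    size;       /* reserved range, with room to grow    */
};

/* Pick a base for a newly installed module: first address past all
 * existing reservations in the module arena.                        */
uint64_t assign_base(const struct module_slot *table, int count)
{
    uint64_t next = 0x0000100000000000ULL;         /* start of the module arena */
    for (int i = 0; i < count; i++)
        if (table[i].base + table[i].size > next)
            next = table[i].base + table[i].size;  /* skip past existing slots  */
    return next;
}
```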
re: file mapping. If your whole disk subsystem was mapped directly into memory, a file mapping could simply be an offset directly within the disk if the file were contiguous. Practically, it might however be better to do a file mapping in the usual sort of way. When you think about it, a file mapping is really an abstraction of the disk’s structure.
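On a POSIX-ish system the per-file version of this already exists as mmap(); the point is just that with a 64 bit off_t and a 64 bit virtual space, even a window deep inside a huge file is cheap to set up. A minimal sketch, with a made-up filename and most error handling omitted:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Map a 1MB window starting 8GB into a large file.  With 32-bit
     * addresses the 8GB offset alone is unrepresentable as a pointer;
     * with 64-bit addresses the whole file could be mapped at once.   */
    int fd = open("bigfile.dat", O_RDONLY);          /* hypothetical file  */
    if (fd < 0)
        return 1;

    off_t  offset = (off_t)8 * 1024 * 1024 * 1024;   /* needs 64-bit off_t */
    size_t length = 1024 * 1024;

    void *p = mmap(NULL, length, PROT_READ, MAP_PRIVATE, fd, offset);
    if (p != MAP_FAILED) {
        printf("first byte at +8GB: %d\n", ((unsigned char *)p)[0]);
        munmap(p, length);
    }
    close(fd);
    return 0;
}
```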
For a database which again applies a structured abstraction onto a file, one could apply similar techniques to creating a file mapping to that of sub structures within the database. (i.e. a table could be viewed as a contiguous virtual pseudo file regardless of its actual organization within the physical database file). The user layer could have an abstract virtual pager that could structure an arbitrary slice of memory regardless of the layout of its pieces within the main database file. This could mirror exactly the same techniques that the OS uses for managing a file mapping into memory.
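One way to picture that user-level pager (a hypothetical interface, not any real database's): the table presents itself as one flat pseudo file, and an extent map translates each table offset into the right slice of the physical database file, exactly as a file system maps file offsets onto disk blocks.

```c
#include <stdint.h>

/* Hypothetical user-level pager: a table scattered across a database
 * file is presented as one contiguous pseudo file.  An extent map
 * translates "table offset" into "database file offset".             */
struct extent {
    uint64_t table_off;     /* where this piece sits in the virtual table */
    uint64_t file_off;      /* where it actually lives in the .db file    */
    uint64_t length;
};

/* Translate a flat table offset into a physical database file offset. */
int64_t table_to_file(const struct extent *map, int count, uint64_t off)
{
    for (int i = 0; i < count; i++)
        if (off >= map[i].table_off && off < map[i].table_off + map[i].length)
            return (int64_t)(map[i].file_off + (off - map[i].table_off));
    return -1;              /* offset not backed by any extent */
}
```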
Ultimately however, there will be upper limits to how large a file mapping window can be; the 64 bit address space is big but not infinite. A reasonable approach would be to arbitrarily choose a maximum size for a storage device and make the default file mapping window the same size as the largest disk mapping window. Or the system could determine it dynamically from which devices are attached to the machine at the time, or which device the file resides on.
Like I said originally, having a relatively inexhaustible address space to play with will likely bring some revolutionary changes in the way OS’s work, and in the way applications do their thing. It will blur the distinction between file objects and memory objects, and we should start to think of having OS’s that manage persistent data in more transparent ways. Why think in terms of files & memory when your entire machine address space can be thought of as your entire machine universe?
This is why it is high time that we moved on from OS’s like Unix that were designed and built for 70’s technology. We are now in the 21st century and we need to seriously start thinking a little more outside the square. I think the new 64 bit technology that is widely accessible will have a large impact on this new era of computing. I also believe it will be fundamentally important for advances in AI for us to break with the traditional file system metaphor.
P
To clarify, by “segment” I meant only that the virtual address range would be arbitrarily partitioned; a single process might have active memory pools mapped into several segments, as needed to support independently grow-able ranges. Purely a VM policy issue, not a hardware issue; AIX does something like it on 32 bit PPC processors. Like NOS/VE, its segments are fixed size, but it sounds like some of the research systems are more flexible.