Intel’s first quad-core processor, ‘Kentsfield’, has found its way into the Tom’s Hardware test lab. Weeks before Intel provides evaluation processors to the press, Tom’s Hardware was able to obtain a qualification sample. The chip was sent through the full benchmark course, showing impressive performance.
So the real results won’t be released until tomorrow.
Big deal.
It would be interesting to see the results of a benchmark between a twin dual-core CPU system and the 4-core system.
This way we could better evaluate the real performance of these beasts.
If I were in Intel marketing, I would NOT be pushing these CPUs at desktop vendors. If my suspicions are correct, there is going to have to be a lot of motherboard redesign to get the best out of these in many application scenarios.
At the moment, the server end of the market is ideal for these chips.
I do wonder about the whole Intel architecture and whether it is going to be a bit shackled by the associated chipset and its limitations when compared to AMD’s HyperTransport architecture.
I get the impression that the AMD way of getting data to/from the CPU has just a little bit more scalability than the Intel Way.
If you can’t keep the CPUs busy then the overall performance gain will be somewhat limited.
Just my 0.2 ariary worth.
Yes, we have arrived at the beginning of the quad-core era. Currently only servers and server applications can fully benefit from a quad-core, but this could change as application developers embrace more thread-friendly programming models. Java supports this; I’m not so sure about C#. Doing this in C/C++ is painstaking in all but the simplest algorithms.
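(For what it’s worth, here is a minimal sketch of the kind of explicit work-splitting involved, written in modern C++ with std::thread. That is a C++11 feature, so it is anachronistic for 2006, when you would have reached for pthreads or Win32 threads instead; the workload and names here are purely illustrative.)

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array by splitting it across one worker thread per core.
int main() {
    const unsigned n_threads =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<int> data(1000000, 1);            // toy workload
    std::vector<long long> partial(n_threads, 0); // one result slot per thread
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == n_threads - 1) ? data.size() : begin + chunk;
        // Each worker writes only to its own slot, so no locking is needed.
        workers.emplace_back([&partial, &data, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    std::cout << "sum = "
              << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}
```

Even in this trivial case you have to decide how to split the data and how to combine the results; for anything with shared mutable state it gets painful fast, which is exactly the point about C/C++.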
But wouldn’t it be nice to rip CD collections in a quarter of the time? Same with DVDs. Same with VMware.
With four decent cores, a RAID controller, and a really fast bus you almost have a mainframe on your desktop. There’s a whole new set of processing you can do.
I personally don’t see this as an Intel Vs. AMD issue. I see this as a “What can I do with 3 more processors” issue.
Correct me if I am wrong, but wouldn’t there also be gains in multitasking even with single-threaded applications? (e.g. separate application = separate thread)
Also, many of the most CPU intensive applications already tend to (at least partly) support multithreading.
BeOS, with its symmetric multiprocessing, saw an almost 100% increase in performance every time you doubled the number of processors, and the apps did not need to be multithreaded in the same way as on conventional OSes because the kernel handled it. So in short, in BeOS you had roughly 1 CPU = 100%, 2 CPUs = 195%, 4 CPUs = 385% performance. BeOS was not aimed at server markets at all (I might add that BeOS was based on a Be microkernel and thus had other problems with overall performance, but the scaling shows that it can be done easily without having to multithread every application, as long as the OS itself handles it properly). So saying that only servers/workstations benefit from more CPUs today is kind of preposterous.
“and the apps did not need to be multithreaded in the same way as on conventional OSes because the kernel handled it”
Incorrect, I’m afraid. BeOS forced developers to write multi-threaded applications, unlike the other major operating systems.
Which meant BeOS was more responsive in general, and better adapted to multi-core by default, but also that development was harder and that there were fewer applications available.
And it wasn’t a microkernel.
BeOS apps had to be multithreaded in the same way as apps on any other *NIX-like kernel. The OS didn’t handle anything. All BeOS did was put each window in a separate thread. That wasn’t a throughput issue at all, but rather a latency one. The GUI processing isn’t intensive enough to need more than a single CPU; multithreading the GUI just fixes the problem of servicing GUI requests promptly. BeOS’s responsiveness could have been achieved (more easily, in fact) by simply building the GUI toolkit on select() instead. Both would fix the responsiveness-under-load issue, and neither would make your app run any faster.
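(As an illustration of the select() alternative described here, a minimal single-threaded event loop sketched in C++ against the POSIX API; stdin stands in for a GUI event source, and the dispatch step is hypothetical.)

```cpp
#include <sys/select.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Watch stdin as a stand-in for a GUI event source.
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);

        // Block until an event source is ready; no thread per window needed.
        if (select(STDIN_FILENO + 1, &readfds, nullptr, nullptr, nullptr) < 0) {
            perror("select");
            return 1;
        }
        if (FD_ISSET(STDIN_FILENO, &readfds)) {
            char buf[256];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0) break; // EOF: quit the loop
            // Dispatch to the appropriate handler here; as long as each
            // handler returns quickly, the UI stays responsive on one thread.
            fwrite(buf, 1, (size_t)n, stdout);
        }
    }
    return 0;
}
```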
As for BeOS’s scalability, it wasn’t there. BeOS’s interactive scheduling was very good, but the scheduler choked after only a few hundred threads. Even NT4 at the time could handle large numbers of threads more gracefully, though its interactive heuristics weren’t nearly as good.
Using synchronous event handlers within a multi-window single-threaded message loop would not provide responsiveness equivalent to the same handlers in a multi-threaded set of message loops on a multiprocessor system. The BLooper class implements an event handler in a manner not dissimilar to one implemented in terms of select, so this whole “more easily” stuff is nonsense. The architecture of an event loop does not vary significantly across platforms. When people complain because their Gtk+ program stops redrawing the UI while an event handler is blocking, they are experiencing what a simplistic single-threaded select-based event model degrades to. Whether throughput would be affected depends largely on the nature of the event handlers called by each BWindow. That is to say, “it depends.” Further, an event loop does not do anything to alter responsiveness under load; that is a scheduling matter.
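(To make the blocking-handler point concrete, a generic C++ sketch, not actual Gtk+ API: run the slow handler inline and the loop freezes for its whole duration; hand it to another thread and the loop keeps servicing events.)

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Stand-in for an event handler that takes a long time.
void slow_work(int event_id) {
    std::this_thread::sleep_for(std::chrono::seconds(2));
    std::cout << "event " << event_id << " done\n";
}

int main() {
    for (int event_id = 0; event_id < 3; ++event_id) {
        // The "event loop": dispatch each event without blocking on it.
        // Calling slow_work(event_id) inline here would stall the loop
        // for two seconds per event.
        std::thread(slow_work, event_id).detach();
        std::cout << "loop still responsive after dispatching event "
                  << event_id << '\n';
    }
    // Give the detached workers time to finish before exiting (demo only;
    // a real program would join them or use a thread pool).
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 0;
}
```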
BeOS’s threads were expensive, especially to create, and the one window = one thread idea was probably a lot more useful for marketing than for development: creating a crapton of kernel threads is hardly a scalability win.
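(For readers who never used BeOS, a hedged, generic sketch of the “one window = one thread” model; the names are hypothetical, not the real BWindow API. Each window owns a kernel thread draining its own message queue, which is exactly the per-window thread cost being complained about above.)

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Each Window owns a thread running its own message loop, so a slow
// handler in one window cannot freeze the others.
class Window {
public:
    explicit Window(std::string name)
        : name_(std::move(name)), worker_([this] { loop(); }) {}

    ~Window() {
        post("quit");
        worker_.join();
    }

    void post(std::string msg) {
        { std::lock_guard<std::mutex> lk(mu_); queue_.push(std::move(msg)); }
        cv_.notify_one();
    }

private:
    void loop() {
        for (;;) {
            std::unique_lock<std::mutex> lk(mu_);
            cv_.wait(lk, [this] { return !queue_.empty(); });
            std::string msg = std::move(queue_.front());
            queue_.pop();
            lk.unlock();
            if (msg == "quit") return;
            std::cout << name_ << " handled: " << msg << '\n';
        }
    }

    std::string name_;
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    std::thread worker_; // declared last so the queue exists before it starts
};

int main() {
    Window a("window-A"), b("window-B");
    a.post("redraw");
    b.post("click");
    return 0; // destructors post "quit" and join their threads
}
```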
First off, the original poster (judgen) had little knowledge of BeOS. It was not a microkernel; furthermore, the OS itself could not “magically” thread apps that had not been multithreaded by the programmer.
Those performance deltas from adding multiple processors were wishful thinking. No matter how many processors you have in the system, if the application has not been threaded, or simply has very little instruction-level parallelism, you will not see much of a performance increase no matter how many processors and fancy OSes you throw at it.
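(The comment doesn’t name it, but this is Amdahl’s law: the ideal speedup on $N$ processors for a program whose parallelizable fraction is $P$ is

$$S(N) = \frac{1}{(1 - P) + \frac{P}{N}}$$

With $P = 0.5$, four cores give only $1/0.625 = 1.6\times$, and even infinite cores cap out at $2\times$, which is why unthreaded code barely moves.)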
It’s kind of hard to get a good benchmark result that really shows the gain in using dual core. Nine out of ten benchmark programs will show a faster single core outscoring a dual (and I assume quad) core. This is to be expected.
Where dual+ cores start to shine is in multitasking CPU-heavy apps: things like encoding a movie while playing the latest game without a noticeable performance loss.
“the apps did not need to be multithreaded in the same way as on conventional OSes because the kernel handled it”
The way I heard it was that BeOS forced developers to use multithreading.
Also, I really doubt that the OS multithreaded your algorithms for you, you had to do that yourself.
One hopes/pleads/begs for some outfit like OSNews to publish a big table relating CPUs, cores, clock speeds, bit width, caches, etc. to the myriad of chips coming from Intel and AMD.
Can’t make heads or tails of either company’s propaganda.
TIA,
Chris
These quad-cores will be interesting chips for MS [small] servers. Microsoft counts physical processors for its licensing, so (for example) one quad-core CPU needs only a single-processor license for SQL Server… Not bad performance-wise.
At my work I’m fully satisfied with a dual-core AMD; currently I have no need for quad-cores (unless I needed to do some video conversion in the background). More and faster HDDs would be sufficient.
mmmm….quad core processor….*ughhghuughghhg*
The results already seem to be posted today, Sept 10.
http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/
Anyhow, it’s interesting that, in comparison to the dual-core system, only the few tests that really utilize extensive parallelism show a two-fold increase in performance. For example, it took the quad-core CPU only 39 seconds, as opposed to 80 seconds, to render a frame in 3D Studio Max. Of course, there are many other factors at play (CPU clocks, architecture specifics), but across the many benchmarks it seems fairly clear that software that exploits such parallelism will see the increase.
The conclusion talks about a 2-CPU wall for game performance, but I bet game software designers will begin to allow for better parallelism across more CPUs.
Take Intel’s fastest dual core chips, with lots of L2 cache, overclock them, and put twice as many cores per chip. Install faster RAM, make the FSB run faster and place on the latest motherboard. Attach fast video card, and fast 10k rpm sata drives.
All of this should get a 100% speed increase, correct? In most of the benchmarks it only gets you 30-50% over AMD’s fastest chip: http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/
Of course, the older AMD system is happy with an older motherboard and slower RAM (read: cheaper). And wait six months to a year, when AMD releases its quad-core chips with features that will make Intel look slow: Intel’s solution shares one front-side bus for everything on the system, while AMD will have multiple buses, with separate buses for RAM and I/O on each core, adding more capability and thus scaling as expected.
Looking solely at those specific “real world benchmarks” (e.g., single-threaded office applications), the Pentium III boxes that are being tossed into dumpsters are almost as good as AMD’s FX-6x-powered machines.
When you look at the multi-threaded CPU tests, however, Kentsfield delivers three to seven times the performance of AMD’s best processors for about a 10% price difference.
I think that a number of the current benchmarks are going to be made slightly redundant by the arrival of multi(>2) core processors.
I wonder if this is why the results don’t show the expected performance gains when using 4 cores?
So, where is the TPC-C (or other such) benchmark for these systems? Those sure keep the CPU busy.
Tom’s Hardware is using real programs as benchmarks, which is how things should be done. If real desktop programs can’t use 4 cores, then they can’t and customers need to know that. (Of course, I expect other results will show that real server apps can effectively use lots of cores.) Special multi-core optimized benchmarks just mislead customers (but they’ll probably benefit the chip vendors).
BeOS was not a microkernel, that is true, but it was kind of similar; I wouldn’t call it monolithic or a hybrid in the Windows NT style either. The kernel itself handled the different servers, like the app server, input server, media server and so on, and thus was at least highly modular. Also, I must add that of course you had to do multithreading in the apps, but not as extensively (at least it was easier, in my opinion) as in other operating systems of the day (1999). As for the earlier comment that BeOS would choke on a large number of threads even before NT4 would, I would like to see some facts for that statement.
I suppose the greatest resource BeOS provided for the development of concurrent software was mostly not attracting the archaic Unix developer who was afraid of multithreading and didn’t see any point in it anyway, because he only had the one processor, and that was good enough for him, and when it wasn’t, he had fork. The development tools did little to ease the development of concurrent programs over other operating systems. The kits released by Be as part of the operating system were designed with multithreading in mind, while the plethora of libraries and toolkits available on other operating systems were largely not thread-safe, and when they were, this was usually accomplished with coarse locking. Insofar as trying to use many X toolkits with multiple UI threads would result in hilarity, you could say that it was “easier.” It was marginally easier to use threads naively in BeOS because threads were created automatically for BWindow instances, but the challenge of concurrent software development isn’t really in the machinery necessary to instantiate threads, and this approach was probably suboptimal. At least it was much less mentally deficient than Java’s original policy of creating a ridiculous number of threads to handle I/O, since you weren’t likely to create a lot of BWindows anyway.