When it comes to Apple Computer’s new Mac Mini, beauty is in the eye of the person holding the wallet, says C|Net. My Take: I updated my blog with an… unrealistic hope for an even cheaper Mac Mini.
In an *in-depth* article on OS X, run on a machine easily four times as powerful as the Mac Mini, he does indeed indicate that the OS X UI is not as snappy as he would like, or as snappy as the alternatives. Cherry-picking opinions to suit your prejudice is rather dishonest.
He indicates it’s not as snappy as the latest Windows box. Well, on OS X you can always turn off font smoothing and get a snappier, yet uglier, UI. Turn on window buffer compression and disable menu animation and effects.
If you were really smart you would have spent time figuring out how to make things faster rather than cribbing about OS X.
I posted the link to the entire article and not just bits and pieces. But failing to mention the full results of an experiment you did is most certainly dishonest (like frame skipping).
I’d like to comment on the thing about apps being CPU bound. You guys seem to be mixing and matching the terms ‘bound’ and ‘bottleneck’. Just trying to sort through the confusion.
You are correct. There is a difference between a system being CPU bound and an app being CPU bound. That is why I corrected Rayiner.
A system is CPU bound if the CPU is the bottleneck. All apps by definition are CPU bound.
A fact drsmithy doesn’t comprehend, and he tried to call me technically inept over it.
Despite multiple attempts to let him gain some technical credibility he severely disappoints. This is one of many attempts, and we have been through more in previous threads.
All apps by definition are CPU bound.
Meaning a faster CPU will most certainly make a difference in performance. Note that apps are different from processes and threads in an OS.
Oracle, Apache, Word, Half-Life, etc. are apps. Apps like Apache use multiple processes or threads to achieve their goals. A thread may be I/O bound or CPU bound.
For example, TPC scores for most databases always improve with faster CPUs. Look at the POWER5-based systems’ scores vs. the POWER4-based systems’ scores.
CPU and memory are interconnected; of course you need sufficient memory for the application in question. But we are not talking about system bottlenecks.
“But we are not talking about system bottlenecks”
That’s what I meant. drsmithy sounds like he is.
G4s never had fast RAM. Sad but true. It’s really not as bad as DDR333 on a PC though; the G4 handles it much better.
Say what ? PCs handle faster RAM *much* better than the G4s. The G4 is *crippled* by its slow bus and may as well still be using PC150 SDRAM, because it gains practically zero benefit from DDR.
Woah. Back up. I never said it sucked, or that everyone in media production doesn’t like it. OS X has about a 90% market share in the media industry though.
The impression you are giving is certainly that you think “everyone” hates Windows.
Watch carefully, they fade in and out. And I’m just not seeing the scrolling problem. The only place I get that is on giant PDFs and in Word.
I’m certain they don’t fade in. This is done for UI guideline reasons, since a fading-in menu is nothing more than completely pointless eyecandy that does nothing more than slow the user down. IOW, a fading-in menu adds pointless UI latency (the irony !), whereas a fade-out is also pointless eyecandy, but doesn’t have a negative impact on how quickly the user can do something.
I certainly can’t perceive any fade-in on my iBook, but the fade-out is very obvious.
The easiest place to see the scrolling problems is huge documents in Safari. There’s a tangible (sometimes nearly a full second) delay between hitting space and the text actually scrolling.
It’s handled through QuickTime and Quartz Extreme, mostly on the video card. But so are the scroll bars and menus.
I doubt that. Or, at least, I doubt there’s any “real” handling of it going on. If it were, then menus and such should remain as snappy as things like Exposé, but they don’t.
This is, if I’m not mistaken, some of the things the next version of OS X is supposed to fix – the lack of general UI acceleration done by the video hardware.
I’d like to comment on the thing about apps being CPU bound. You guys seem to be mixing and matching the terms ‘bound’ and ‘bottleneck’. Just trying to sort through the confusion.
In the context, at least to me, they mean the same thing. A “CPU bound” app is one whose bottleneck is the CPU. Personally, I can’t see any other way to interpret it.
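To make the distinction concrete, here’s a minimal sketch (my own example, not anything from the article; the sample file path is hypothetical): a CPU bound task’s wall-clock time tracks CPU speed, while an I/O bound task spends most of its time waiting on the disk, so a faster CPU barely moves it.

```python
import time

def cpu_bound_task(n=10_000_000):
    # Pure computation: pegs a core; wall-clock time tracks CPU speed.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound_task(path="bigfile.dat"):  # hypothetical sample file
    # Mostly waiting on the disk: wall-clock time tracks I/O speed,
    # and a faster CPU barely changes it.
    with open(path, "rb") as f:
        while f.read(1 << 20):  # read in 1MB chunks
            pass

for task in (cpu_bound_task, io_bound_task):
    start = time.perf_counter()
    try:
        task()
    except FileNotFoundError:
        print(task.__name__, "skipped: no sample file")
        continue
    print(task.__name__, f"{time.perf_counter() - start:.2f}s")
```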
The scheduling policy could easily preempt your task with a higher-priority task, and the one you care about could be kicked off the CPU, affecting the performance of said app.
Certainly, but that’s completely irrelevant as to whether or not the app is CPU bound.
Headless iMac is an oxymoron.
Try and think outside the Apple box for a few moments. I meant a machine with the iMac’s specs and no monitor. What the Mac Mini, IMHO, *should* have been.
Wrong again.
Sorry, but it’s right. Many (indeed, these days, most) apps are limited by I/O or memory bandwidth long before they’re limited by CPU speed.
Hold on, you are saying a game is graphics bound if the CPU is fairly fast, upwards of 3GHz?
No, I’m saying the game is video bound if adding a faster CPU doesn’t give a proportionate performance increase.
With most games today, going from a 2GHz P4 to a 3GHz P4 gives a negligible difference – mainly to do with accompanying speedups in unrelated, incidental operations – whereas going from some video card to a video card 50% faster will almost certainly give nearly a 50% performance boost.
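A rough back-of-the-envelope model of that claim (my own simplification, not something from the benchmarks): the CPU and GPU work on a frame largely in parallel, so the frame rate is capped by whichever stage is slower.

```python
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # The slower of the two stages caps the frame rate.
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# A GPU-limited game: CPU needs 5ms per frame, GPU needs 10ms.
print(fps(5.0, 10.0))   # 100 fps
print(fps(3.3, 10.0))   # still 100 fps - a ~50% faster CPU changes nothing
print(fps(5.0, 6.7))    # ~149 fps - a ~50% faster GPU gives ~50% more frames
```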
In very, very general terms, an app is CPU bound if it pegs the CPU at 100% while you’re waiting for it. If it doesn’t do that, then either some other aspect of the system is slowing it down, or it’s simply running as fast as it can.
Kind of what I was saying, wasn’t it?
No. You are saying all apps are CPU bound and therefore that only the CPU has a significant impact on how an app performs (and therefore, it follows, that the best way to improve performance is always to add more CPU power).
Let’s say you take your normal workstation load and play a game. The game will most certainly perform better on an unloaded system than in one with your typical load.
Well that’s going to depend on your system load, but if you’ve got a reasonably quick system (say the aforementioned 3GHz P4) and the typical load is fairly low (say 20%) then, no, the game probably won’t run meaningfully faster (from a statistical perspective) if that load was reduced to 0%. Throw in a faster video card, however, and you’ll get a *much* greater performance improvement.
The game is CPU bound. Without a fast enough CPU, or under a heavy load, the graphics card wouldn’t be stressed as much.
False. The video card and system CPU perform discrete, independent tasks.
Time to drag yourself out of the 70s, AR, mainstream computers have had independent processors onboard to unload tasks from the main CPU for going on twenty years now.
Hold on a second. Let’s max out the memory at 250GB, add a RAID array of 15KRPM disks, run the database on a 200MHz Pentium Pro and see how it performs. The CPU will be too pinned with disk and network interrupts to do any useful work. Let’s even put it on a single 3GHz P4 machine with a nice EMC or IBM terabyte RAID box and see how the CPU is pinned just servicing I/O interrupts, unable to run a complex SQL query. You would need an MP box to distribute the interrupts and the real workload for useful work to happen.
Stop moving the goalposts. Of course when *other* system bottlenecks are minimised, the CPU can quite feasibly become the limiting factor. However, that’s not what you said – you said the CPU was _always_ the limiting factor. So by your logic the 250GB of RAM and the RAID array don’t do anything, the performance benefit all comes from the faster CPU.
All apps are CPU bound; code executes to make I/O happen, and if that code is interrupted or starved, performance will suffer. Once the CPU is fast enough, the I/O bottlenecks will show up.
At which point the apps are *no longer CPU bound*. I’m glad we agree your assertion was wrong.
Simple fact is today, for most tasks, CPU power is in abundance. Today, most applications are _not_ CPU bound.
Note how in every one of your examples the CPU is always a fast one.
Well that’s going to depend on what your definition of “fast” is. The principle remains unchanged, however, not all apps are CPU bound.
A Squid server with a 500MHz Pentium III, 128MB of RAM and an IDE disk is going to have its performance boosted far, far more by another gig of RAM and some SCSI disks than by a 1.4GHz P3. Indeed, just adding the 1.4GHz P3 would probably see little to no performance improvement at all.
I never claimed sufficient RAM is not needed or that disk throughput doesn’t matter.
Yes, you did, by omission.
Next time you mean “all apps are CPU bound, except when they’re bound by some other aspect of system performance” *say that*, because it’s a very, very different statement to “all apps are CPU bound”.
By your logic, “all apps are disk bound” or “all apps are memory bound” are equally valid claims, because as long as you’ve got enough performance in every other aspect of the system, they’re going to be the limiting factor.
No amount of memory or disk throughput will make your system perform if the CPU is not up to the task for your workload.
I never suggested it would. Unlike you, I didn’t make any blanket statements about which aspects of the system limit application performance.
He indicates it’s not as snappy as the latest Windows box. Well, on OS X you can always turn off font smoothing and get a snappier, yet uglier, UI.
Trouble is Windows also does font smoothing.
Turn on window buffer compression and disable menu animation and effects.
If you were really smart you would have spent time figuring out how to make things faster rather than cribbing about OS X.
I’ve already tried the tricks. You seem to forget I’ve been using OS X on and off since it was called Rhapsody.
I posted the link to the entire article and not just bits and pieces. But failing to mention the full results of an experiment you did is most certainly dishonest (like frame skipping).
That’s because in the “experiment” I posted about first, there wasn’t any frame skipping – all ten files quite happily played on the screen at once. I wasn’t aware I had to start dragging other windows all over the place and resizing things as part of the “experiment”.
A system is CPU bound if the CPU is the bottleneck. All apps by definition are CPU bound.
No matter how often you say it, it won’t become true. You’re either flat out wrong, or the definition of “all apps are CPU bound” is so non-specific as to be absolutely worthless.
A fact drsmithy doesn’t comprehend, and he tried to call me technically inept over it.
I’ll tell you what. Why don’t you wander into a meeting room full of systems engineers and administrators and tell them your “all apps are CPU bound” theory. Make sure there’s a good mix of DBA, infrastructure and HPC staff there for maximum effect.
A thread may be I/O bound or CPU bound.
Ah, now we see what’s going on. By using AR’s Definition of an Application(tm) – and carefully not telling anyone what that definition was beforehand – you conveniently allow yourself to move the goalposts afterwards.
Congratulations. You’ve won with the Chewbacca defense.
For example, TPC scores for most databases always improve with faster CPUs. Look at the POWER5-based systems’ scores vs. the POWER4-based systems’ scores.
Please tell me you don’t honestly believe the only differences in those systems were the CPUs…
“Say what ? PCs handle faster RAM *much* better than the G4s. The G4 is *crippled* by its slow bus and may as well still be using PC150 SDRAM, because it gains practically zero benefit from DDR.”
No no no. Misinterpretation there. I know G4s are crippled by slow RAM, but they handle that RAM faster than, say, a Pentium that uses the same RAM.
” The impression you are giving is certainly that you think “everyone” hates Windows.”
Considering how many people buy it, that really wouldn’t be a sane thing to say at all.
“I’m certain they don’t fade in. This is done for UI guideline reasons, since a fading-in menu is nothing more than completely pointless eyecandy that does nothing more than slow the user down. IOW, a fading-in menu adds pointless UI latency (the irony !), whereas a fade-out is also pointless eyecandy, but doesn’t have a negative impact on how quickly the user can do something.”
“I certainly can’t perceive any fade-in on my iBook, but the fade-out is very obvious.”
The fade in is much faster than the fade out, but it’s there. And it’s there so that the menu appearing doesn’t look as harsh.
“The easiest place to see the scrolling problems is huge documents in Safari. There’s a tangible (sometimes nearly a full second) delay between hitting space and the text actually scrolling.”
Really not seeing it. Maybe you could post a link to a page that’s doing that to you.
” I doubt that. Or, at least, I doubt there’s any “real” handling of it going on. If it were, then menus and such should remain as snappy as things like expose, but they don’t.
This is, if I’m not mistaken, some of the things the next version of OS X is supposed to fix – the lack of general UI acceleration done by the video hardware.”
Exposé isn’t handled by Quartz Extreme. That’s how it works on old hardware. My old iMac G3 333 certainly doesn’t use Quartz Extreme and Exposé works fine on it.
“In the context, at least to me, they mean the same thing. A “CPU bound” app is one whose bottleneck is the CPU. Personally, I can’t see any other way to interpret it.”
Something can require a CPU but not be bottlenecked by it.
“Try and think outside the Apple box for a few moments. I meant a machine with the iMac’s specs and no monitor. What the Mac Mini, IMHO, *should* have been.”
I didn’t say Apple couldn’t release something with those specs, just that it wouldn’t be an iMac. iMacs are all-in-one machines, not almost-all-in-one. The Mac mini isn’t a headless iMac. It’s a low-end, low-specced Mac.
” Trouble is Windows also does font smoothing.”
That’s off by default. Windows has quite a few things turned off by default.
Sorry, but it’s right. Many (indeed, these days, most) apps are limited by I/O or memory bandwidth long before they’re limited by CPU speed.
Apps are not limited only by memory bandwidth. SPEC benchmarks sit wholly in cache and are not memory bandwidth bound. With Itaniums having 6MB and 9MB caches, memory is the last bottleneck for SPEC benchmarks.
Also, stop confusing system bottlenecks with how apps work in a system.
False. The video card and system CPU perform discrete, independent tasks.
How does the video card get the data to perform its task? Because some instructions execute on the CPU to put it there.
I am done trying to explain to you “how stuff works” in operating systems. Video cards don’t just get work out of thin air. Code executes in apps, drivers and libraries to make things happen, and code executes on CPUs first.
The CPU and GPU perform discrete tasks only if the CPU does its task first. On some of today’s systems the CPU has the memory controller built in and also has to perform the cache coherency protocols.
The GPU is dependent on the CPU.
Time to drag yourself out of the 70s, AR, mainstream computers have had independent processors onboard to unload tasks from the main CPU for going on twenty years now.
It’s time you went to college and took a class in Operating Systems.
In very, very general terms, an app is CPU bound if it pegs the CPU at 100% while you’re waiting for it. If it doesn’t do that, then either some other aspect of the system is slowing it down, or it’s simply running as fast as it can.
This is wrong on so many levels I don’t know where to start. A slower CPU will be pegged at 100% executing a particular app while a faster one won’t, because it will do more work faster. If an app is pegging any CPU at 100% all the time, for the same duration regardless of CPU speed, it is most definitely in an endless loop.
I’ll tell you what. Why don’t you wander into a meeting room full of systems engineers and administrators and tell them your “all apps are CPU bound” theory. Make sure there’s a good mix of DBA, infrastructure and HPC staff there for maximum effect.
In a group of engineers I would use the appropriate terms, like threads, scheduling class latencies, interrupt distribution policies and so on.
I would use the terms “the system is CPU bound” or “the system is memory bound” to explain bottlenecks.
If a CPU is getting constantly pinned by interrupts because the customer’s I/O is too heavy for the system, I would most certainly add another CPU, because no more work is getting done and the CPU is only handling interrupts from the I/O devices. Be it a 2GHz or a 3.4GHz CPU, I won’t tell them to add more memory.
However, all apps execute on CPUs and are thus CPU bound first. If the CPU isn’t fast enough then nothing else matters.
You are conflating “How apps work” with “what system bottlenecks are”.
Also, Rayiner made a statement in a particular context: the context of OSes not having an impact on CPU bound apps. I said they did. He accepted my explanation and didn’t say anything. You, on the other hand, took that statement out of context, misunderstood it and called me technically inept.
In doing so you showed me that your understanding of “How OSes work” is deeply flawed. This whole exercise was very revealing and now I know to take your OS X theories with a BAG OF SALT.
Here is an article comparing the Athlon 64 FX vs. the Pentium 4 Extreme Edition.
Since, according to you, all games are video card bound, can you explain why the FPS numbers differ between CPUs on systems using the same graphics card? They are all 3GHz-plus CPUs.
http://www.digit-life.com/articles2/roundupmobo/pentium4-32ghz-ee.h…
Here is one with the same Pentium 4 Extreme Edition vs. the older Pentium 4. Again, according to your theory there should be absolutely no difference in FPS numbers at all, since the video card is the bottleneck.
http://www.xbitlabs.com/articles/cpu/display/p4xe-346_11.html
Or this one, where an overclocked P4 Extreme Edition performs differently on the same benchmarks from the non-overclocked P4 Extreme Edition. There should be absolutely no effect on performance from a 400MHz change in CPU clock speed, according to you, right?
UT2003 is a game, and the CPU being over 3GHz should have no performance impact on it. Yet two identical systems, the only difference being a 400MHz speed bump to one of the same model CPUs, show a performance difference.
http://www.pcstats.com/articleview.cfm?articleid=808&page=4
I have proved with the above examples that, all else being equal, a gain in CPU performance impacts apps, including games. I am not debating the magnitude of the performance gains that different components deliver to a particular app.
System bottlenecks only come into play if an app isn’t performing optimally. However, in the case of the overclocked system the app was performing optimally, and a change in CPU frequency still made a difference.
One more thing: if CPUs didn’t make a difference, why do review sites post game benchmarks every time a new CPU is reviewed? Stop making a further fool of yourself. I am not debating performance tuning of a system; I know how that works.
I am debating with you because you took a sentence out of context and challenged my technical capabilities. I just wanted to see if you knew enough about anything for me to feel offended. I know now that you are ignorant and said what you said out of ignorance.
Like I said, you always fold like a house of cards when technically challenged. Rayiner is a technically sound person and would most certainly have put up a fight if I had posted technically incorrect information; that is his nature. He didn’t, so I assume he validated my statement in the context of his post.
No no no. Misinterpretation there. I know G4s are crippled by slow RAM, but they handle that RAM faster than, say, a Pentium that uses the same RAM.
You’re going to have to expand on that one, because not only can’t I fathom what you’re really trying to say, but there’s certainly no evidence to suggest that it’s true.
Even the *slowest* Pentium (M) today has a 400MHz bus. The G4 is still stuck at 167MHz (or 133MHz for the iBooks). I really don’t know what you’re trying to say with “handles the RAM faster”.
This smacks of handwaving and hearsay – like that “superior components” thing that gets trotted out all the time.
Exposé isn’t handled by Quartz Extreme. That’s how it works on old hardware. My old iMac G3 333 certainly doesn’t use Quartz Extreme and Exposé works fine on it.
AFAIK Exposé uses the video card if it can. That’s what I was talking about.
If it doesn’t use any sort of hardware acceleration, well, that just makes the other aspects of the UI’s performance even more deplorable.
Something can require a CPU but not be bottlenecked by it.
Which makes for a completely pointless line of reasoning. One may as well say “all apps are laws-of-physics-bound”.
I didn’t say Apple couldn’t release something with those specs, just that it wouldn’t be an iMac. iMacs are all-in-one machines, not almost-all-in-one. The Mac mini isn’t a headless iMac. It’s a low-end, low-specced Mac.
“Headless iMac” is a shortened way of saying “a machine with the hardware specifications of the iMac, but without the bundled monitor”.
That’s off by default. Windows has quite a few things turned off by default.
Turning it on hardly brings the system to a crawl. Indeed, there’s no perceivable performance impact as far as I can tell. Also, what else does it have “turned off” ?
Apps are not limited only by memory bandwidth.
I never suggested they were. I said they *could* be limited by memory bandwidth.
You’re the one who insists on talking in absolutes, not me. Stop projecting.
SPEC benchmarks sit wholly in cache and are not memory bandwidth bound. With Itaniums having 6MB and 9MB caches, memory is the last bottleneck for SPEC benchmarks.
“All apps” != SPEC benchmarks.
How does the video card get the data to perform its task? Because some instructions execute on the CPU to put it there.
Right. So your logic is basically “computers can’t run without CPUs, therefore all apps are CPU bound” ?
Remind me why you were making this pointless statement in the context of *system performance* again ? You may as well have said “all apps are memory bound” or “all apps are electricity bound”.
This is wrong on so many levels I don’t know where to start. A slower CPU will be pegged at 100% executing a particular app while a faster one won’t, because it will do more work faster.
No, the faster one won’t because – as I said – either the limiting factor is elsewhere in the system or the app is running as fast as it possibly can.
If a CPU is getting constantly pinned by interrupts because the customer’s I/O is too heavy for the system, I would most certainly add another CPU, because no more work is getting done and the CPU is only handling interrupts from the I/O devices. Be it a 2GHz or a 3.4GHz CPU, I won’t tell them to add more memory.
Trouble is according to you, you’ll *always* recommend a faster CPU to improve performance, because according to you “all apps are CPU bound”.
However, all apps execute on CPUs and are thus CPU bound first. If the CPU isn’t fast enough then nothing else matters.
And if the CPU *is* fast enough ?
You are conflating “How apps work” with “what system bottlenecks are”.
No, I’m making statements that are actually meaningful and contribute something to the discussion, rather than comments whose insightfulness is roughly on par with “computers need electricity”.
Also, Rayiner made a statement in a particular context: the context of OSes not having an impact on CPU bound apps. I said they did. He accepted my explanation and didn’t say anything. You, on the other hand, took that statement out of context, misunderstood it and called me technically inept.
Rayiner probably has less patience than me.
Rayiner was talking about performance. His statement was “the OS has almost no impact on the performance on CPU bound apps […]” – which is basically true – once an app is executing, the OS overheads have practically zero impact on its performance.
Your reply was “all apps are cpu bound aren’t they? where else would you schedule them?”. Note that the context of Rayiner’s statement – and the comment it is replying to – is *performance* and the impacts of various system attributes on it. It’s not “well, if we didn’t have a computer at all, we couldn’t very well run any apps, could we ?”.
Similarly with your list of benchmarks you are taking my comments out of context and purposefully misinterpreting them. At no stage did I claim all games are video card bound. At no stage did I suggest additional CPU power would never produce an improvement. At no stage did I suggest an app couldn’t be CPU bound.
The only thing I have said – and continue to say – is that the CPU is not the only limiting factor in application performance and, hence, your blanket statement that “all apps are CPU bound” was incorrect, when that statement is interpreted in any meaningful sense.
Trouble is according to you, you’ll *always* recommend a faster CPU to improve performance, because according to you “all apps are CPU bound”.
Because you are being obtuse by taking a post out of context.
No, I’m making statements that are actually meaningful and contribute something to the discussion, rather than comments whose insightfulness is roughly on par with “computers need electricity”.
Really, you contribute meaningful things to the discussion? Please, let me laugh.
Trouble is according to you, you’ll *always* recommend a faster CPU to improve performance, because according to you “all apps are CPU bound”.
Did you notice I said add another CPU?
Rayiner probably has less patience than me.
Not really… he has stayed active in debates where he thought he was right for a lot longer than you have.
Rayiner was talking about performance.
No, he was talking about an OS’s impact on performance.
His statement was “the OS has almost no impact on the performance on CPU bound apps […]” – which is basically true – once an app is executing, the OS overheads have practically zero impact on its performance.
That is the most absurd explanation I have ever heard.
Every thread has a time quantum from the scheduler’s perspective. Let’s say the most common one is 3 clock ticks before it is switched out for the next thread. If a clock interrupt fires every 100 milliseconds, the time taken for a thread from an app to execute is 300ms plus the overhead of the clock interrupt handler and scheduler.
If the next thread is not from the same app, then the thread from the app you care about won’t be scheduled until it has a high enough scheduling priority. And this is just the basic scheduling overhead. If interrupts from, say, a graphics card, disk and network arrive simultaneously, they will be handled according to interrupt priorities, and only then will the app threads be scheduled. At least on most modern multitasking OSes this is the case.
The OS overhead is not zero unless you are running on a cooperative multitasking OS with interrupts disabled and the app in question never yields, or you run the task as a real-time task with interrupts disabled.
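To put toy numbers on this (a sketch using the figures above; the per-switch overhead value is an assumption, and real tick rates are far shorter than 100ms):

```python
# Toy figures from the example above; real OSes use 1-10ms ticks.
TICK_MS = 100                 # clock interrupt period
QUANTUM_TICKS = 3             # ticks before a thread is switched out
SWITCH_OVERHEAD_MS = 0.01     # assumed cost of tick handler + scheduler pass

quantum_ms = TICK_MS * QUANTUM_TICKS        # 300ms on the CPU per turn
runnable = 4                                # threads competing for one CPU
wait_ms = (runnable - 1) * quantum_ms       # worst-case wait between turns

print(f"runs {quantum_ms}ms per turn, may wait {wait_ms}ms for the next one")
print(f"scheduler overhead per turn: ~{QUANTUM_TICKS * SWITCH_OVERHEAD_MS}ms")
```

Note how, in this sketch, the scheduler’s own bookkeeping is tiny next to the time a thread can spend waiting behind other runnable threads.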
Your reply was “all apps are cpu bound aren’t they? where else would you schedule them?”. Note that the context of Rayiner’s statement – and the comment it is replying to – is *performance* and the impacts of various system attributes on it. It’s not “well, if we didn’t have a computer at all, we couldn’t very well run any apps, could we ?”.
Rayiner posted a definition from Wikipedia to defend his statement, which I pointed out was incorrect on a modern multitasking OS.
What is your point?
Similarly with your list of benchmarks you are taking my comments out of context and purposefully misinterpreting them. At no stage did I claim all games are video card bound. At no stage did I suggest additional CPU power would never produce an improvement. At no stage did I suggest an app couldn’t be CPU bound.
You certainly said adding a faster CPU won’t make much of a difference. I said it would, because the app would be able to perform better.
Here are some statistics.
In all 3D games we use for testing purposes Intel processors yield to AMD solutions. However, the advantages of the contemporary AMD platform in gaming applications are indisputable. As for the results of Pentium 4 XE 3.46GHz, this CPU appears about 1-3% faster than the predecessor, Pentium 4 XE 3.4GHz, and about 5-10% faster than Pentium 4 560 (the only exception is Quake3 game where Pentium 4 XE 3.46GHz manages to perform 19% better than Pentium 4 560).
……….
Overclocking yields an extra 10 percentage points in the same component of Unreal Tournament 2003.
With most games today, going from a 2GHz P4 to a 3GHz P4 gives a negligible difference – mainly to do with accompanying speedups in unrelated, incidental operations – whereas going from some video card to a video card 50% faster will almost certainly give nearly a 50% performance boost.
Note the emphasis on the word negligible.
In the statistics I posted above, they are talking of a 10% gain on average from just a 400-600MHz speed bump on the same architecture, and 19% in certain games.
I would hardly call 10% a negligible performance gain.
Here is a video card review.
Note carefully the conclusions drawn. It is obvious that getting the next fastest Radeon has less of an effect on game performance than a 400MHz bump in CPU frequency.
http://www.xbitlabs.com/articles/video/display/ati-radeon-x850_10.h…
You shouldn’t take the results of the two first resolutions seriously as they are obviously limited by the performance of the system’s central processor. This level of this limit differs between ATI’s and NVIDIA’s cards, which is indicative of the difference in their OpenGL drivers. The graphics subsystem becomes the bottleneck in 1600×1200 only, and the GeForce 6800 GT, the RADEON X800 XT and the newcomer, the X850 XT Platinum Edition, have almost the same speeds there.
………
With enabled full-screen antialiasing and anisotropic filtering we have a different picture – ATI’s graphics cards win the resolutions above 1024×768, but they have always been good in such hard operational modes! There’s a difference of 10fps between the two top RADEONs, although it looks negligible against the absolute speeds of about 120-130fps.
Supposedly you can get a mini for free
http://www.FreeMiniMacs.com/?r=14091424
You have to sign up for a sponsor like Blockbuster, but then you can cancel after you sign up.
It may be worth it, maybe not. I got a modem this way once.
Re: AR (IP: —.Sun.COM)
Did you notice I said add another CPU?
I did. Semantics.
No, he was talking about an OS’s impact on performance.
1: (Anonymous)
“As for the rest of it, my 867 Mhz repeatedly out-performed my self-built and optimized 2.8 Ghz P4 while using Windows 2000 (note: no special fx). As the old saying goes: “What Intel hath giveth, Microsoft taketh away.”
Upon switching to Linux, the 2.8 Ghz P4 finally out-performed the TiBook. Microsoft knowingly inserts a half-second delay when opening menus, etc. Its a registry key, check it out.”
2: (Rayiner)
“The OS has almost no impact on the performance on CPU bound apps, so Windows vs Linux shouldn’t have any effect on those benchmarks.”
3: (AR)
“Care to clarify that statement. All apps are cpu bound aren’t they? where else would you schedule them? ”
I fail to see what leap gets the context from general performance (the first two quotes) to “you need a CPU to schedule tasks, therefore without one you can’t run anything” (what it has since become apparent you meant).
That is the most absurd explanation I have ever heard.
Every thread has a time quantum from the scheduler’s perspective. Let’s say the most common one is 3 clock ticks before it is switched out for the next thread. If a clock interrupt fires every 100 milliseconds, the time taken for a thread from an app to execute is 300ms plus the overhead of the clock interrupt handler and scheduler.
Yes, and the point Rayiner was trying to make is that this overhead, on any remotely modern OS and computer, is insignificant (or at least should be). I.e., running exactly the same code in the same conditions on two different OSes should give (basically) identical results. Whilst it’s been a while since I’ve actually tried it, I believe SPEC supports this theory.
If the next thread is not from the same app, then the thread from the app you care about won’t be scheduled until it has a high enough scheduling priority. And this is just the basic scheduling overhead. If interrupts from, say, a graphics card, disk and network arrive simultaneously, they will be handled according to interrupt priorities, and only then will the app threads be scheduled. At least on most modern multitasking OSes this is the case.
That’s not OS overhead “slowing down” the app, that’s another thread being scheduled over the top of it, stopping that app from executing at all. *Completely* different thing.
Rayiner posted a definition from Wikipedia to defend his statement, which I pointed out was incorrect on a modern multitasking OS.
The definition is quite valid. You are arguing semantics.
“CPU bound refers to a condition where the time to complete a computation is determined principally by the speed of the central processor and main memory.”
There are many situations where “the time to complete a computation” is *not* determined “principally” by the speed of the CPU. I have posted some examples. Admittedly, however, I posted some in error based on an outdated knowledge of current game performance.
I would hardly call 10% a negligible performance gain.
I would. You wouldn’t be able to perceive it without some sort of benchmark.
Certainly 20% isn’t negligible. But, again, I never said a faster CPU would *never* give any improvement.
Note carefully the conclusions drawn. It is obvious that getting the next fastest Radeon has less of and effect on Game performance than a 400MHz bump in cpu frequency.
Possibly this is true now. It certainly wasn’t a couple of years ago, when faster CPUs gave much smaller (if any) benefit compared to faster graphics cards. Clearly the pendulum has swung back the other way. I’m sorry I don’t keep up with the cutting edge of video game benchmarking.
Although…
“The graphics subsystem becomes the bottleneck in 1600×1200 only, […]”
According to those graphs, at 1600×1200 the performance difference between a “Radeon X850 XT PE” and a “Radeon X700 XT” is over 100%. I see you’re cherrypicking again.
The principle of my argument remains unchanged – many apps gain far more benefit from improving other aspects of system performance than from more CPU power. Not all apps are CPU bound.