Linked by Thom Holwerda on Wed 20th May 2015 23:38 UTC
Games

While AMD seems to have made up with Slightly Mad Studios, at least if this tweet from Taylor is anything to go by, the company is facing yet another supposedly GameWorks-related struggle with CD Projekt Red's freshly released RPG The Witcher 3. The game makes use of several GameWorks technologies, most notably HBAO+ and HairWorks. The latter, which adds tens of thousands of tessellated hair strands to characters, dramatically decreases frame rate performance on AMD graphics cards, sometimes by as much as 50 percent.

I got bitten by this just the other day. I'm currently enjoying my time with The Witcher III - go out and buy it, it's worth your money - but the first few hours of the game were plagued by stutter and sudden framerate drops. I was stumped, because the drops didn't occur out in the open world, but only when the head of the player character - a guy named Geralt - came close to the camera, or was in focus during a cutscene. It didn't make any sense, since I have one of the fancier Radeon R9 270X models, which should handle the game at the highest settings just fine.

It wasn't until a friend asked "uh, you've got NVIDIA HairWorks turned off, right?" that I figured it out. Turns out, it was set to "Geralt only". Turning it off completely solved all the performance problems. It simply hadn't registered with me that this feature is pretty much entirely tied to NVIDIA cards.

While I would prefer all these technologies to be open, the cold and harsh truth is that in this case, they give NVIDIA an edge, and I don't blame them for keeping them closed - we're not talking crucial communication protocols or internet standards, but an API to render hair. I do blame the developers of The Witcher for not warning me about this. Better yet: automatically disable and/or hide NVIDIA-specific options for Radeon owners altogether. It seems like a no-brainer if you want to prevent disgruntled consumers. Not a big deal - but still.

Comment by SpyroRyder
by SpyroRyder on Thu 21st May 2015 00:23 UTC
SpyroRyder
Member since:
2014-08-25

I have a similar opinion. At the end of the day Nvidia provide a feature which is better on their cards than AMD's. This is a feature that ADDS things to the games that use it. If these guys decided to implement this feature rather than leave it out just because AMD cards aren't as good at it, then good for them. Yes, it does suck a bit from a consumer perspective, but I personally would rather have a game that works spectacularly on certain hardware than one that is mediocre on everything. Additionally, it is an optional feature, so it's not like it's always going to affect the performance on AMD cards.

Reply Score: 2

RE: Comment by SpyroRyder
by WereCatf on Thu 21st May 2015 07:11 UTC in reply to "Comment by SpyroRyder"
WereCatf Member since:
2006-02-15

I have a similar opinion. At the end of the day Nvidia provide a feature which is better on their cards than AMD's.


Does it work better on NVIDIA's cards because it's just tuned that way, or does it intentionally cripple performance on AMD GPUs? That's what I always keep wondering about; it certainly wouldn't surprise me at all if NVIDIA did intentionally cripple performance way more than necessary.

Reply Score: 6

RE[2]: Comment by SpyroRyder
by hobgoblin on Thu 21st May 2015 12:31 UTC in reply to "RE: Comment by SpyroRyder"
hobgoblin Member since:
2005-07-06

When it comes to the tech world, I am tempted to flip around the old adage about incompetence and malice.

Damn it, Intel was caught red-handed some years back violating the feature detection spec for CPUs.

Rather than look at the actual register that was supposed to say what x86 extensions a CPU supported, binaries compiled with their compiler would check what was supposed to be a purely descriptive string.

Change that string and the same binary would perform just as well on an AMD or VIA CPU as it did on an actual Intel.

But without said string, the binaries would drop back to a code path that was more suited for 386 CPUs than a modern core.
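
In rough terms, the check being described would look something like this minimal sketch (using the MSVC __cpuid intrinsic; SSE4.2 is just one example feature bit, and the dispatch logic here is illustrative, not Intel's actual compiler output):

    // Leaf 0 of CPUID returns the vendor string in EBX, EDX, ECX ("GenuineIntel").
    // Leaf 1 returns the feature flags; ECX bit 20 is SSE4.2.
    #include <intrin.h>
    #include <cstring>
    #include <cstdio>

    static bool vendor_is_intel() {
        int r[4];
        __cpuid(r, 0);
        char vendor[13] = {};
        memcpy(vendor + 0, &r[1], 4);  // EBX
        memcpy(vendor + 4, &r[3], 4);  // EDX
        memcpy(vendor + 8, &r[2], 4);  // ECX
        return strcmp(vendor, "GenuineIntel") == 0;
    }

    static bool supports_sse42() {
        int r[4];
        __cpuid(r, 1);
        return (r[2] >> 20) & 1;  // ECX bit 20 = SSE4.2
    }

    int main() {
        // Spec-compliant dispatch: trust the feature bit, whoever made the CPU.
        bool fast_path = supports_sse42();
        // The behaviour being criticised: gate the fast path on the vendor string,
        // so a non-Intel CPU reporting the same feature bit still gets the slow path.
        bool vendor_gated_fast_path = vendor_is_intel() && supports_sse42();
        printf("feature check: %d, vendor-gated check: %d\n",
               fast_path, vendor_gated_fast_path);
        return 0;
    }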

The whole market is rotten to the core.

Reply Score: 6

RE[3]: Comment by SpyroRyder
by WereCatf on Thu 21st May 2015 13:30 UTC in reply to "RE[2]: Comment by SpyroRyder"
WereCatf Member since:
2006-02-15

Indeed, I do remember that debacle and that is exactly why I can't shake the feeling that NVIDIA is doing a similar thing on purpose. Would be nice if someone could actually prove that they aren't doing such, but I'm going to assume they are until someone more knowledgeable proves the assumption wrong.

Reply Score: 4

RE[4]: Comment by SpyroRyder
by bassbeast on Sun 24th May 2015 15:15 UTC in reply to "RE[3]: Comment by SpyroRyder"
bassbeast Member since:
2007-11-11

What is sad is that companies like Intel pull that crap and don't get penalized for it, even when they admit they are rigging! And how much of a difference does this kind of rigging make? A hell of a lot actually, as the Tek Syndicate link I'll provide below shows: when you use non-rigged programs, suddenly those AMD CPUs that the benches say are so far behind are trading blows with Intel chips three times the price!

I'll say the same thing about this as I said when Intel admitted they were rigging: if you win fair and square? Congrats, you deserve all the increased sales and accolades you have earned. But when they rig the market they need to get busted and be shunned by everyone. Rigging benefits only the one doing the rigging: it damages the market by hampering competition, causes the consumer to pay higher prices than they would under fair competition, gives the rigger both the funds and the motivation (due to lack of consequences) to pull more nasty moves (see Intel killing the Nvidia chipset business, for example), and leaves the market as a whole much worse off.

https://www.youtube.com/watch?v=eu8Sekdb-IE&list=TLRt7C-qG9964

This is the test between Intel and AMD, and below is where Intel admits rigging benchmarks...

https://www.youtube.com/watch?v=ZRBzqoniRMg&index=16&list=PL662F4191...

Sorry I had to use video links, but Tek Syndicate is one of the few places that doesn't accept money or favors from either chip company. For an example of just how much Intel money affects reviews, look at Tom's Hardware and their "best Gaming CPUs" list, where the writer even admits that most new games require quad cores... and then recommends a Pentium dual-core over an FX6 that is cheaper! But this is not surprising: look at their site without ABP and, wadda ya know, it's wallpapered in Intel ads.

Reply Score: 2

RE[3]: Comment by SpyroRyder
by Carewolf on Thu 21st May 2015 14:07 UTC in reply to "RE[2]: Comment by SpyroRyder"
Carewolf Member since:
2005-09-08

Intel still does that. Anything compiled with the Intel compiler (such as many benchmarks and games) will deliberately cripple any non-Intel CPUs.

Reply Score: 3

RE[4]: Comment by SpyroRyder
by tumdum on Fri 22nd May 2015 15:20 UTC in reply to "RE[3]: Comment by SpyroRyder"
tumdum Member since:
2009-01-31

Can you name one AAA game for Windows which was compiled using icc?

Reply Score: 1

RE[5]: Comment by SpyroRyder
by Carewolf on Sun 24th May 2015 11:29 UTC in reply to "RE[4]: Comment by SpyroRyder"
Carewolf Member since:
2005-09-08

Can you name one AAA game for Windows which was compiled using icc?

AAA studios usually know better, but they sometimes end up using precompiled third-party libraries that were built with icc, because it produces faster results on Intel chips, and Intel is directly involved in and supports many performance-oriented libraries.

It is hard to notice, though, since AMD is usually slower even when not crippled, and it takes some effort to tell from a disassembled release binary which compiler produced it.

Also, in response to software that removed the Intel-only check from binaries, Intel has changed the code several times, breaking those tools, so there is now no single easy signature for it and no automatic way to patch it out.

Reply Score: 2

RE[2]: Comment by SpyroRyder
by Chrispynutt on Thu 21st May 2015 16:38 UTC in reply to "RE: Comment by SpyroRyder"
Chrispynutt Member since:
2012-03-14

I think you are kind of right. Both AMD and Nvidia have their own competing sets of closed additions; in this case I guess the AMD counterpart is TressFX, which uses OpenCL I believe, and OpenCL generally performs better on AMD. Nvidia, however, has both OpenCL and their own CUDA.

Nvidia is not naturally open; they like to keep everything under their control. AMD likes to play the open card when they are at a disadvantage. Whether that is their natural state is up for debate.

In fairness, AMD also powers the console releases, so I guess no super magic floppy hair for them either.

I have seen Nvidia specifically lock out mixed AMD+Nvidia setups for hardware PhysX. That might also be a reason.

For the record, folks, I have an Nvidia GTX 970 right now, previously an AMD 7970, and an Nvidia 560 Ti before that (and so on).

Reply Score: 2

RE[2]: Comment by SpyroRyder
by pfaffa on Fri 22nd May 2015 17:51 UTC in reply to "RE: Comment by SpyroRyder"
pfaffa Member since:
2012-04-24

It apparently works on AMD if you override the game settings in Catalyst and tweak one setting.

Reply Score: 1

RE: Comment by SpyroRyder
by reduz on Thu 21st May 2015 13:38 UTC in reply to "Comment by SpyroRyder"
reduz Member since:
2006-02-25

The problem is that NVidia makes GameWorks and they don't give a rat's ass about AMD.

Game companies know that hiring a _really_ good rendering engineer is difficult because you can count them on your fingers, so they go with the canned solution and use GameWorks instead.

AMD could counter this by publishing open implementations of the algorithms used by NVidia, but AMD never gives a fuck about anything.

Reply Score: 2

RE[2]: Comment by SpyroRyder
by Alfman on Thu 21st May 2015 14:57 UTC in reply to "RE: Comment by SpyroRyder"
Alfman Member since:
2011-01-28

reduz,

Game companies know that hiring a _really_ good rendering engineer is difficult because you can count them on your fingers, so they go with the canned solution and use GameWorks instead.


Counting good engineers on "fingers" is an exaggeration. I was very good at this sort of thing; it's exactly the kind of job I wanted but never managed to land during the tech recession, so I got into web development instead (ugh). That's the thing: there's enough talent in this world to develop thousands of rendering engines, but just because we are willing and able to build them doesn't mean the market can sustain it. The word that best characterizes the changes of the past decade is "outsourcing", with the aim of reducing business costs.

I suspect it's probably still the case today that game shops are being overwhelmed by applications; they could likely find multiple candidates who'd be able and eager to build them a new engine, but it's really hard to make a business case for it. Seriously, a company's options are:
1) Just outsource this to a platform that's already free, supported, and available today.
2) Increase costs by hiring engineers to design and build it internally, increase time to market, and possibly contend with reverse engineering because NVidia decided not to divulge the technical programming information needed.

So it's no surprise most companies just go with canned solutions.

Edited 2015-05-21 15:00 UTC

Reply Score: 2

MacMan
Member since:
2006-11-19

It's brain-dead simple to check what graphics processor the system is using in either OpenGL or DirectX.

Seriously, how freaking hard could it have been to create a list of defaults that work for each type of card? It would have taken me no more than an hour or so to set up such a system, then maybe a few days with QA trying it out on different types of machines to establish what those defaults should be.

It's just not that hard, guys.
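
For what it's worth, a minimal sketch of that kind of check on the DirectX side, via DXGI (the PCI vendor IDs are the standard ones; the "default HairWorks setting" decision is just an illustrative placeholder, not anything The Witcher 3 actually does):

    #include <windows.h>
    #include <dxgi.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    int main() {
        // Enumerate the primary adapter and read its PCI vendor ID.
        IDXGIFactory* factory = nullptr;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
            return 1;

        IDXGIAdapter* adapter = nullptr;
        if (factory->EnumAdapters(0, &adapter) == S_OK) {
            DXGI_ADAPTER_DESC desc = {};
            adapter->GetDesc(&desc);
            const bool isNvidia = (desc.VendorId == 0x10DE);  // 0x1002 = AMD
            // Hypothetical default: only enable the vendor-specific hair option on NVIDIA.
            printf("HairWorks default: %s\n", isNvidia ? "Geralt only" : "off");
            adapter->Release();
        }
        factory->Release();
        return 0;
    }

On the OpenGL side the equivalent check is a glGetString(GL_VENDOR) call once a context exists.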

Reply Score: 5

This is not like TressFX
by belal1 on Thu 21st May 2015 03:02 UTC
belal1
Member since:
2013-05-25

TressFX's code is open for developers, which allows them to tinker with it to fit both AMD and Nvidia. HairWorks, on the other hand, is Nvidia only. Not only that, but the performance hit is definitely another area of concern for AMD. Here's a quote from wccftech.com:

In fact AMD states that the performance hit of TressFX is identical on both Nvidia and AMD hardware. All the while HairWorks clearly favors Nvidia’s hardware by a significant margin yet it is still slower.
AMD attributes this performance lead to the open nature of the source code, allowing both game developers and Nvidia to optimize it to their needs.

Read more: http://wccftech.com/tressfx-hair-20-detailed-improved-visuals-perfo...



This just goes to show that AMD is and always will be the leader that pushes innovation for the industry as a whole, whereas Nvidia is only concerned with its own brand.

Reply Score: 6

RE: This is not like TressFX
by Licaon_Kter on Thu 21st May 2015 08:23 UTC in reply to "This is not like TressFX"
Licaon_Kter Member since:
2010-03-19

On launch, TressFX had the same FPS impact on nVidia cards; yeah, they sorted it out after a few patches.
Just like 3D Stereo (HD3D) on Deus Ex: HR: it took them a few patches to get it working on nVidia, and, funny story, after launch the main stereo guy from nVidia, Andrew Fear, was pleading with Eidos/Nixxes on their forum to get in contact so they could help open up 3D stereo for nVidia too.

Reply Score: 1

Comment by shmerl
by shmerl on Thu 21st May 2015 06:29 UTC
shmerl
Member since:
2010-06-08

This was pretty much known to those who followed the development of the game; they spoke about it more than once. Still, they could have used portable technologies.

I wonder what the Linux version will use, since HairWorks uses DirectCompute and doesn't exist on Linux even for Nvidia cards.

Edited 2015-05-21 06:30 UTC

Reply Score: 3

What's the difference?
by franzrogar on Thu 21st May 2015 07:27 UTC
franzrogar
Member since:
2012-05-17

What's the difference between AMD screaming about Witcher 3 performance and NVIDIA screaming about Lara's hair in the Tomb Raider reboot (Square Enix), which clearly stated not to use the realistic hair setting with NVIDIA cards?

This is truly a hairy problem...

Reply Score: 2

Or tweak your settings
by grahamtriggs on Thu 21st May 2015 08:26 UTC
grahamtriggs
Member since:
2009-05-27

You could argue that it is a case of AMD not optimising their drivers for certain games...

http://wccftech.com/witcher-3-run-hairworks-amd-gpus-crippling-perf...

Reply Score: 1

I don't see it
by kwan_e on Thu 21st May 2015 10:11 UTC
kwan_e
Member since:
2007-02-18

While I would prefer all these technologies to be open, the cold and harsh truth is that in this case, they give NVIDIA an edge, and I don't blame them for keeping them closed - we're not talking crucial communication protocols or internet standards, but an API to render hair.


How does it give them an edge? Having the feature, as opposed to not having it, definitely gives them an edge, but how does it being closed or open change how much of an edge it is?

If a competitor sees value in the feature, they would just implement their own, whether or not nVidia opened their tech. And AMD apparently sees enough value in the feature to have their own counterpart.

Reply Score: 2

not the worst example
by chithanh on Thu 21st May 2015 15:11 UTC
chithanh
Member since:
2006-06-18

The Witcher 3 is actually one of the milder annoyances, because there you can turn off the problematic features.

Much worse is the situation with Project Cars, which is built entirely around proprietary NVidia technology that can never be made to work properly on AMD hardware.

http://www.reddit.com/r/pcmasterrace/comments/367qav/mark_my_word_i...

Reply Score: 2

RE: not the worst example
by Alfman on Thu 21st May 2015 21:11 UTC in reply to "not the worst example"
Alfman Member since:
2011-01-28

chithanh,

Much worse is the situation with Project Cars, which is built entirely around proprietary NVidia technology that can never be made to work properly on AMD hardware.


Yea, it would make the most sense to fix these issues inside the GameWorks code. But so long as NVidia refuses to do it, the onus gets shifted to the developers, who have to re-implement what GameWorks does suboptimally on AMD. No matter which way you cut this, the situation of having games dependent on NVidia binary blobs is pretty bad for AMD and its customers. Here's hoping this gets rectified somehow, because competition really should be based on actual merit rather than on vendor-locked software.

NVidia might have the lead anyway, but the binary blobs are still a disservice to consumers in general.

Reply Score: 2