AMD has released a new release candidate of the upcoming WHQL driver (Cat 12.1). This RC11 brings important performance gains in OpenGL tessellation, especially when high levels of tessellation are required. According to the release notes, there is a big boost in TessMark 0.3.0 at the insane level (tessellation factor of 64X):
Performance highlights of the 8.921.2 RC11 AMD Radeon™ HD 7900 driver:
– 8% (up to) performance improvement in Aliens vs. Predator
– 15% (up to) performance improvement in Battleforge with Anti-Aliasing enabled
– 3% (up to) performance improvement in Battlefield 3
– 3% (up to) performance improvement in Crysis 2
– 6% (up to) performance improvement in Crysis Warhead
– 10% (up to) performance improvement in F1 2010
– 5% (up to) performance improvement in Unigine with Anti-Aliasing enabled
– 250% (up to) performance improvement in TessMark (OpenGL) when set to “insane” levels
I don’t have a HD 7970 yet, only a HD 6970, so let’s see if the performance boost is also visible on the HD 6900 series. Here is the comparison between Catalyst 11.6 (the last Catalyst that brought important gains in OpenGL tessellation, see HERE) and Catalyst 8.921.2 RC11:
TessMark settings: map set 1, 1920×1080 fullscreen, 60 seconds, no AA, no postfx:
– tess level: moderate (X8) – Gain: -1.6%
|Cat11.6: 44090 points, 735 FPS – SAPPHIRE Radeon HD 6970
|Cat 8.921.2 RC11: 43364 points, 723 FPS – SAPPHIRE Radeon HD 6970
– tess level: normal (X16) – Gain: +0.4%
|Cat11.6: 19398 points, 323 FPS – SAPPHIRE Radeon HD 6970
|Cat 8.921.2 RC11: 19480 points, 325 FPS – SAPPHIRE Radeon HD 6970
– tess level: extreme (X32) – Gain: +27.5%
|Cat11.6: 3397 points, 57 FPS – SAPPHIRE Radeon HD 6970
|Cat 8.921.2 RC11: 4334 points, 72 FPS – SAPPHIRE Radeon HD 6970
– tess level: insane (X64) – Gain: +80.6%
|Cat11.6: 594 points, 10 FPS – SAPPHIRE Radeon HD 6970
|Cat 8.921.2 RC11: 1073 points, 18 FPS – SAPPHIRE Radeon HD 6970
|Cat 8.921.2 RC11: 560 points, 10 FPS – SAPPHIRE Radeon HD 6970, TessMark renamed (toto.exe)
Indeed, there is a huge performance boost at high levels of tessellation (X32 and X64), while scores at regular levels of tessellation (X8 and X16) remain the same. I hope I can test a HD 7970 shortly…
UPDATE (2012.01.24): I received some explanations from AMD: the performance improvement in tessellation is generic for applications with large tessellation factors (read: greater than 16, the max being 64) but is not necessarily optimal for more reasonable settings (like X8). That’s why AMD decided to enable the performance improvement on a per-application basis.
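AMD’s explanation makes more sense when you consider how fast the amount of generated geometry grows with the tessellation factor. As a back-of-the-envelope model (my own assumption, not from AMD’s notes): uniformly subdividing a triangular patch with factor N splits each edge into N segments, giving roughly N² sub-triangles.

```python
# Rough model of how tessellated geometry grows with the tess factor.
# For a triangular patch subdivided uniformly with factor N, each edge
# is split into N segments, giving ~N^2 sub-triangles. This is an
# approximation, not the exact output of the GL tessellator.

def triangles_per_patch(tess_factor: int) -> int:
    return tess_factor ** 2

for level, factor in [("moderate", 8), ("normal", 16),
                      ("extreme", 32), ("insane", 64)]:
    print(f"{level:8s} (X{factor}): ~{triangles_per_patch(factor)} triangles per patch")

# Going from X16 to X64 multiplies the geometry load by a factor of 16,
# which hints at why the optimization only pays off for large factors.
```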
I added a score with TessMark renamed in toto.exe to show the difference between the performance improvement enabled and disabled.
You can download Catalyst 8.921.2 RC11 for Radeon HD 7900 HERE.
Catalyst 8.921.2 RC11 is an OpenGL 4.2 driver with 233 OpenGL extensions:
– Drivers Version: 8.921.2.0 – Catalyst 11.12 (1-19-2012)
– ATI Catalyst Release Version String: 8.921.2-120119a-132101E-ATI
– OpenGL Version: 4.2.11338 Compatibility Profile/Debug Context
– OpenGL Extensions: 233 extensions (GL=212 and WGL=21)
Compared to Catalyst 11.10 preview 3, there is one new extension:
Here is the complete list of all OpenGL extensions exposed for a Radeon HD 6970 (Win7 64-bit):
Source: Geeks3D forum
10 thoughts on “(Test) AMD Catalyst 8.921.2 RC11 for Radeon HD 7900, Big Performance Boost in OpenGL Tessellation (*** Updated ***)”
The real question is… does it provide exactly the same quality as NVIDIA and Radeons on older drivers, or is it just another buggy driver that gets its speedup from rendering artifacts or some tess “optimizations”?
Rename TessMark EXE and see what happens. 😉
The optimization is application specific profiling in the driver.
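Nobody outside AMD knows exactly how the detection works, but a driver-side per-application profile is conceptually just a lookup keyed on the executable name. A hypothetical sketch (the profile table, keys, and flag names are invented for illustration):

```python
import ntpath  # Windows-style path handling, works on any OS

# Hypothetical per-application profile table: the driver looks up the
# running executable's name and applies any matching tweaks. Renaming
# the EXE (TessMark.exe -> toto.exe) misses the lookup, so the
# optimization is silently disabled.
APP_PROFILES = {
    "tessmark.exe": {"tess_fast_path": True},
}

def profile_for(exe_path: str) -> dict:
    name = ntpath.basename(exe_path).lower()
    return APP_PROFILES.get(name, {})

print(profile_for(r"C:\Games\TessMark.exe"))  # fast path enabled
print(profile_for(r"C:\Games\toto.exe"))      # default settings
```

This also shows why a checksum-based scheme (as suggested below in the comments) would be more robust against a simple rename, at the cost of breaking whenever the application is patched.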
renamed TessMark and got:
(32x): 2516 (avg fps 42)
(64x): 639 (avg fps 10)
and with the original EXE name:
(32x): 4100 (avg fps 68)
(64x): 739 (avg fps 13)
Radeon 5850 @936mhz core using 8.95 drivers and CAP 11.12 #3
I wonder what they did?
Possibly removed some dead code/unneeded variables?
JeGX, what happens if you tweak the shader code slightly so the hash changes, do the performance boosts remain?
It is a bit unfortunate if they are application specific tweaks, just means that they are expecting NVIDIA’s new hardware to be a lot faster.
Isn’t it kinda weak to optimize drivers for an app according to its name? Why don’t they use at least some sort of checksum… and anyway, why don’t they optimize the drivers for all OpenGL tessellation apps? Probably a bit more effort would be paid back by a real overall impact on performance for all Radeons.
Strange, strange… I’m quite curious what’s behind all of this.
“the performance improvement in tessellation is generic for applications with large tessellation factors (read greater than 16, the max being 64) but is not necessarily optimal for more reasonnable settings (like X8)”
Shouldn’t they enable it then on a tessellation factor basis?
And if the switching cost for each draw command is too high, they should/could perhaps accumulate the average tessellation factor of the running app and use that instead. Binary-name hacks are so ’90s.
Tessellation factor can be computed dynamically in the shader. Every patch can have different tessellation factor. What you suggest would kill any performance benefit.
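For context on why the driver can’t know the factor up front: a typical tessellation control shader computes the factor per patch at runtime, for example from camera distance. A common distance-based level-of-detail scheme, sketched in Python (the near/far ranges and clamp values are illustrative, not from any real engine):

```python
def tess_factor(distance: float, near: float = 1.0, far: float = 100.0,
                max_factor: float = 64.0, min_factor: float = 1.0) -> float:
    """Distance-based LOD: patches near the camera get a high tess
    factor, distant ones a low factor. In OpenGL this logic would run
    per patch in the tessellation control shader, so the driver only
    learns the value at draw time."""
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)                          # clamp to [0, 1]
    return max_factor + t * (min_factor - max_factor)  # lerp: near -> max

print(tess_factor(1.0))    # close patch: 64.0
print(tess_factor(100.0))  # far patch: 1.0
```

Since every patch can land anywhere in that range each frame, a single per-app average (as suggested above) would often be far from the factor the shader actually emits.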
Sure, the shader can override it, but almost anything is better than binary-name detection.
Yes, I know about those commands, but your solution just wouldn’t work. In many cases it would decrease performance because the default tessellation factor would be completely different from the real one set by the shader. I agree that binary-name detection is an ugly hack, but at least it works.
Yeah, I expected a C-interface command to define the max tessellation factor a shader can set, but only found that :/
Still, the GPU (and its driver) should know the size of the current tessellation buffer and how often that buffer is too small to hold additional vertices (I assume this is what they optimized), because it needs to handle this case either by resizing the buffer or by pushing the current contents further down the shader queue to make free space for the remaining vertices. In both cases I would assume that somewhere in the silicon this must either touch software/microcode or set performance counters, so the graphics driver should be able to detect at runtime whether it’s advisable to increase the default tessellation buffer size for the current program.
Still, those are just assumptions. Nevertheless, it’s already hard enough to find the `fastpath` with current graphics drivers, and such things don’t make it easier. Nor do I think things like this make maintaining the driver’s codebase easier. Or do they run an artificial neural network or a genetic algorithm over their settings to find the optimum for each new app?
Don’t get me wrong, I’m not blaming anyone; this is just the view of a (non-end-)user who wants the driver to work without per-application support from the vendor, nothing less.
Comments are closed.