- 1 – Overview
- 2 – Quadro P5000 Gallery
- 3 – Quadro P5000 GPU Data
- 4 – Benchmarks: Quadro P5000 vs GeForce GTX 1080
- 4.1 – Preparation
- 4.2 – SPECviewperf 12.1
- 4.3 – LuxMark 3.1
- 4.4 – CineBench R15.0
- 4.5 – FurMark 1.19
- 4.6 – Unigine Superposition
- 4.7 – GeeXLab: heavy pixel shader test
- 4.8 – GeeXLab: two sided lighting test
- 5 – Burn-in Test
- 6 – Conclusion
4 – Benchmarks: Quadro P5000 vs GeForce GTX 1080
4.1 – Preparation
Here we are: the P5000 vs GTX 1080 battle.
– CPU: AMD Ryzen 7 1700 (default clock speed)
– Motherboard: MSI X370 Gaming Pro Carbon
– RAM: 16GB DDR4 Corsair Dominator Platinum @ 2666MHz
– PSU: Corsair AX860i
– Windows 10 64-bit Creators Update
– Driver: GeForce/Quadro 382.05
For this performance test, I used the following graphics tools:
– SPECviewperf 12.1
– LuxMark 3.1
– CineBench R15.0
– FurMark 1.19
– Unigine Superposition
– GeeXLab: heavy Shadertoy pixel shader test
– GeeXLab: old OpenGL two sided lighting test…
For the GTX 1080, I used the ASUS Strix GTX 1080. This card has a boost clock of 1936MHz and a memory clock of 5005MHz. To compare apples with apples, I downclocked the GTX 1080 with MSI Afterburner: -203MHz on the GPU core and -492MHz on the memory.
4.2 – SPECviewperf 12.1
The SPECviewperf test has no end… it lasts hours and hours… no, I'm joking, but it is a long graphics benchmark. The GTX 1080 and the P5000 have similar results in the 3dsMax and showcase-01 tests. The GTX 1080 is faster in Maya. But in catia, creo-01, energy-01, medical-01, snx-02 (Siemens NX) and sw-03 (SolidWorks), the Quadro P5000 is much faster than the GTX 1080. Especially in the snx-02 test, which is a CAD test, the difference is just insane…
More information about SPECviewperf can be found HERE.
– GeForce GTX 1080 scores:
– Quadro P5000 scores:
The SNX-02 test:
4.3 – LuxMark 3.1
LuxMark is an OpenCL benchmark. There are 3 tests: light (LuxBall HDR), medium (Neumann TLM) and heavy (Hotel lobby).
The P5000 is slightly faster than the GTX 1080 in the first two tests, and the GTX 1080 is faster in the heavy test…
– LuxBall HDR:
|12238 – Quadro P5000
|11286 – GTX 1080
– Neumann TLM:
|7143 – Quadro P5000
|6625 – GTX 1080
– Hotel lobby:
|3356 – GTX 1080
|3323 – Quadro P5000
4.4 – CineBench R15.0
CineBench is a graphics benchmark based on Cinema4D. Here are the OpenGL benchmark results:
|90.08 FPS – Quadro P5000
|87.6 FPS – GTX 1080
Like with LuxMark, there is no significant difference between the P5000 and the GTX 1080.
4.5 – FurMark 1.19
FurMark 1.19 can be downloaded from THIS PAGE.
– Preset 1080
|6559 points / 109 FPS – GTX 1080
|6355 points / 106 FPS – Quadro P5000
There is no significant difference between the P5000 and the GTX 1080, though the GTX 1080 is slightly faster.
4.6 – Unigine Superposition
Superposition is the new graphics benchmark from the Unigine team. I covered it in THIS ARTICLE.
– 1080P Medium Direct3D
|14534 points / 108 FPS – GTX 1080
|14316 points / 107 FPS – Quadro P5000
The scores are similar to the FurMark ones: no significant difference, but the GTX 1080 is slightly ahead.
4.7 – GeeXLab: heavy pixel shader test
This test is a pure pixel shader test with GeeXLab. I selected the Shadertoy Elephant demo, which is very heavy. This pixel shader demo can be found in the GeeXLab code sample pack, in the gl-21/shadertoy-multipass/gl21-elephant/ folder.
– GeeXLab + Shadertoy Elephant multi-pass demo, 1280×720
|30 FPS – GTX 1080
|26 FPS – Quadro P5000
4.8 – GeeXLab: two sided lighting test
Here is a very interesting test I coded after reading this article. Two-sided lighting… Behind these three words, we find an old fixed-pipeline feature of OpenGL 1.2:
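A minimal sketch of that fixed-pipeline setup (my reconstruction, not the demo's actual code):

```c
/* Enable lighting on both faces of each polygon (OpenGL 1.2 fixed pipeline). */
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

/* Each side of a polygon can get its own material properties. */
const GLfloat front_diffuse[] = {0.8f, 0.2f, 0.2f, 1.0f};
const GLfloat back_diffuse[]  = {0.2f, 0.2f, 0.8f, 1.0f};
glMaterialfv(GL_FRONT, GL_DIFFUSE, front_diffuse);
glMaterialfv(GL_BACK,  GL_DIFFUSE, back_diffuse);
```

With GL_LIGHT_MODEL_TWO_SIDE enabled, the driver flips the normal for back-facing polygons and lights them with the GL_BACK material.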
Many CAD applications render meshes with lighting computed on both sides of each polygon. Each side can have different material properties (ambient, diffuse or specular).
I updated GeeXLab with support for two-sided lighting and coded a simple demo that renders a cylinder (641,601 vertices and 1,280,000 faces) four times, for a total of 2,566,404 vertices and 5,120,000 faces. There is one light (GL_LIGHT0) and, of course, no shader: only the old OpenGL fixed pipeline is used to render the scene.
And here are the results. I added a Quadro K4000 (Kepler) just for fun…
– Resolution: 1280×720
|1072 FPS – Quadro P5000
|188 FPS – Quadro K4000
|184 FPS – GTX 1080
– Resolution: 3840×2160
|650 FPS – Quadro P5000
|115 FPS – Quadro K4000
|30 FPS – GTX 1080
Ouch! At the low resolution (1280×720), the P5000 is almost 6 times faster, and at 4K the P5000 is more than 20 times faster than the GTX 1080! Why? If the GPU is the same on both cards, are some functions disabled in the GeForce version of the GP104? Or is it only a matter of drivers — does the Quadro driver have an optimized render path for two-sided lighting? According to everything I have read on the web, the GPU is the same and it's just a matter of highly optimized Quadro drivers. It's a shocking optimization! Even the old Kepler-based Quadro K4000 is faster than the Pascal-based GTX 1080…
If you want to play with this demo, you need the latest GeeXLab 0.15.0.1+ and the two-sided lighting demo available in the code sample pack, in the gl-21/two-sided-lighthing/ folder.
5 – Burn-in Test
Just a quick (5-minute) burn-in test with FurMark 1.19. At idle, the testbed draws around 50W; under FurMark, it draws 250W. With a PSU efficiency factor of 0.9 we get: (250-50) x 0.9 = 180W. Remove around 10W for the CPU and we get a rough approximation of 170W for the P5000. At idle, the GPU temperature of the P5000 is 33°C. The max temperature during the 5-minute stress test was 80°C; with a longer stress test, it would probably have reached 81 or 82°C.
6 – Conclusion
In this test I only covered the graphics performance (OpenGL, OpenCL) of the Quadro P5000 compared to the GeForce GTX 1080. GeForce vs Quadro performance is a question I have often seen on the web. The goal of this article isn't to tell you to pick a GeForce instead of a Quadro; the goal is a simple GPU battle. If you need features such as Quadro Sync, Quadro MOSAIC or ECC memory, or if the CAD software you work with requires a Quadro accelerator, you have to buy a Quadro.
After this test, here is what I can say: with normal, modern OpenGL or OpenCL applications, there is no real difference in performance between the two graphics cards. Of course, in applications that need a huge amount of graphics memory (greater than 8GB), the P5000 will shine with its 16GB of GDDR5X memory, but for other modern OpenGL / Direct3D applications there is no significant difference: at the same clock speeds, performance is more or less the same.
Now, for professional applications from ISVs (Independent Software Vendors) like Autodesk, Siemens or Dassault Systèmes (Catia), the P5000 offers important gains over the GTX 1080. These gains essentially come from optimized graphics drivers (see the two-sided lighting test as an example of one particular optimization) and optimized firmware.
Thanks to the Internex team for the Quadro P5000 sample!
11 thoughts on “(Test) NVIDIA Quadro P5000 vs GeForce GTX 1080”
The Quadro has seven more OpenGL extensions than the GeForce.
What's the extra mumbo-jumbo there?
Shame on me, I didn’t look at the GL extensions. I’m fixing this bug…
List of the 7 mysterious extensions:
Nice. Can you also test the CUDA performance?
Just want to note that some OpenGL fixed-pipeline functions, like the one used in the "GeeXLab: two sided lighting test", are software-limited on non-Quadro cards (10-40x performance loss).
To get around this, we can use shaders.
Discussion and tests on NVIDIA/Quadro and Radeon can be found here: http://www.gamedev.ru/code/forum/?id=192325
Is there a way to use the GTX 1080 drivers on the Quadro P5000? Thanks!
No. You think they're stupid enough to let you do that? And there are also hardware and firmware differences.
And there's another reason they block you from doing that: Quadros are used for 10-bit content editing. GeForce is locked out and can only output 8-bit.
Your score is messed up. You must have messed up the driver installation or the settings, because my Quadro P600 gets 107 FPS in CineBench.
It was a great test with very interesting results. There is a lot of confusion and misunderstanding about Quadro graphics cards, and this test sheds some light on them.
One thing grabbed my attention in the SPECviewperf test. I noticed that all applications that use polygons (3ds Max, Maya and Showcase) show almost the same results between the P5000 and the GTX 1080. However, in the applications that use NURBS surfaces by default and have a "tessellation" process in the background (CATIA, SolidWorks, NX, Creo), the P5000 performs a lot better than the GTX.
I'm using Autodesk Alias and SolidWorks, and I have experienced this lack of performance on GeForce cards before. Also, SolidWorks has a feature called RealView which doesn't work on GeForce graphics cards.
Thank you for this test. You did an awesome job.
A lot of other things are intentionally *removed* from the GeForce drivers vs the Quadro drivers.
(Apart from ECC support, both cards are 100% identical hardware.)
– 10-bit support: NVIDIA is changing its mind there; GeForce does have "partial" 10-bit display (for example in full-screen HDR games), but – for now – only Quadro can display a windowed 10-bit OpenGL frame.
– The famous error 43 when doing GPU passthrough in a virtual machine: only GRID and Quadro cards can be virtualized… so nice to have to pay 5x the price to enable virtualization!!
– TCC mode (Tesla Compute Cluster): a specific mode that allows a card to NOT use the Windows WDDM driver (which consumes a bunch of VRAM) + no need to hook a display to the card + TCC mode is needed by high-end CUDA software. All Titan cards can be switched between WDDM and TCC mode… so why buy a 5x more expensive Tesla card that is slower 🙂
– Many OpenGL optimizations are simply removed from the GeForce driver.
And *of course* you cannot:
– install a GeForce card with a Quadro driver, and vice versa
– mix a GeForce and a Quadro card in the same system and use both drivers
– it's an intentional mess…
– edit: not sure yet what they've changed in the newest Titan Xp drivers, but competition from the AMD Vega FE is kicking Nvidia in the ass over this bullshit policy 🙂. For now it only affects OpenGL optimizations.