Today at GTC 2016 (GPU Technology Conference), NVIDIA unveiled the Tesla P100, a new computing platform based on NVIDIA's new GPU architecture: Pascal GP100.
Today at the 2016 GPU Technology Conference in San Jose, NVIDIA CEO Jen-Hsun Huang announced the new NVIDIA Tesla P100, the most advanced accelerator ever built. Based on the new NVIDIA Pascal GP100 GPU and powered by ground-breaking technologies, Tesla P100 delivers the highest absolute performance for HPC, technical computing, deep learning, and many computationally intensive datacenter workloads.
An introductory article about the Tesla P100 and the Pascal GP100 GPU has been published HERE. The NVIDIA Tesla P100 homepage can be found HERE.
In a few words, here are the main features of the GP100 GPU:
- the GP100 is built with TSMC's 16nm FinFET manufacturing process and packs around 15.3 billion transistors!
- a full GP100 GPU has 60 SMs (Streaming Multiprocessors) and each SM has 64 CUDA cores. So a full GP100 packs 3840 CUDA cores, running @ 1328 MHz (base clock) and 1480 MHz (boost clock).
- a full GP100 has 240 texture units.
- the GP100 works with HBM2 (High Bandwidth Memory 2) memory on a 4096-bit memory interface. 16GB of HBM2 is the maximum amount of GPU memory.
- the GP100 is a real monster in FP64 computing: more than 5300 GFLOPS! Check the GFLOPS table to feel the power 😉 What's more, the FP32:FP64 ratio is 2:1, in other words FP64 throughput is half the FP32 throughput.
- the TDP is 300 Watts.
- the GP100 supports the new Compute Capability 6.0.
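The peak-throughput figures above can be recovered from the core counts and clocks in the list. This is a rough sketch, assuming the usual convention that each CUDA core executes one fused multiply-add (2 FLOPs) per clock at the boost frequency; the function name `peak_gflops` is just an illustration, not an NVIDIA tool:

```python
# Back-of-the-envelope peak throughput for the full GP100,
# using the figures quoted above: 60 SMs x 64 CUDA cores,
# 1480 MHz boost clock, FP64 rate = 1/2 of FP32.

SMS = 60
CORES_PER_SM = 64
BOOST_GHZ = 1.480

def peak_gflops(cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak = cores x clock x FLOPs per core per clock (FMA = 2)."""
    return cores * clock_ghz * flops_per_clock

cuda_cores = SMS * CORES_PER_SM           # 3840 CUDA cores
fp32 = peak_gflops(cuda_cores, BOOST_GHZ) # ~11366 GFLOPS FP32
fp64 = fp32 / 2                           # ~5683 GFLOPS FP64, i.e. "more than 5300"
print(f"{cuda_cores} cores, FP32 {fp32:.1f} GFLOPS, FP64 {fp64:.1f} GFLOPS")
```

This matches the "more than 5300 GFLOPS" FP64 claim for the full 60-SM chip.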
The Tesla P100 is built on a cut down GP100: only 3584 CUDA cores (56 SMs) and 224 texture units.
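The same arithmetic applied to the cut-down Tesla P100 gives the board's advertised numbers. Again a sketch under the same assumption (one FMA, i.e. 2 FLOPs, per core per clock at the 1480 MHz boost clock):

```python
# Peak throughput for the Tesla P100: 56 SMs x 64 cores = 3584 CUDA cores.
sms, cores_per_sm, boost_ghz = 56, 64, 1.480
cores = sms * cores_per_sm          # 3584 CUDA cores
fp32_gflops = cores * boost_ghz * 2 # ~10608.6 GFLOPS (~10.6 TFLOPS)
fp64_gflops = fp32_gflops / 2       # ~5304.3 GFLOPS (~5.3 TFLOPS)
print(cores, round(fp32_gflops, 1), round(fp64_gflops, 1))
```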
Pascal GP100 GPU block diagram made up of 60 Streaming Multiprocessors
NVIDIA Pascal GP100 GPU Streaming Multiprocessor
4 thoughts on “(GTC 2016) NVIDIA Pascal GP100 GPU and Tesla P100 Computing Platform Announced”
Swiss National Supercomputing Centre upgrades to Pascal
Whitepaper date stamped April 15th at P100 site
Don’t remember if there was an older version before…
Nvidia GP104 event tomorrow.
GeForce GTX 1080 officially announced.