Here is a short but detailed whitepaper about floating point on NVIDIA GPUs. Understanding floating point accuracy and IEEE 754 compliance is important for GPU computing developers. The paper focuses on CUDA but should help OpenCL developers as well.
In this whitepaper, you will learn:
- How the IEEE 754 standard fits in with NVIDIA GPUs
- How fused multiply-add (FMA) improves accuracy
- Why there is more than one way to compute a dot product (the paper presents three)
- How to make sense of different numerical results between CPU and GPU
You can download the whitepaper HERE (7-page PDF).
NVIDIA has extended the floating point capabilities of its GPUs with each successive hardware generation. Current NVIDIA architectures, such as the Tesla C2xxx, GTX 4xx, and GTX 5xx series, support both single and double precision with IEEE 754 rounding and include hardware support for fused multiply-add in both precisions. Older NVIDIA architectures support some of these features but not others. In CUDA, the features supported by a GPU are encoded in its compute capability number.
Devices with compute capability 2.0 and above support both single and double precision IEEE 754 arithmetic, including fused multiply-add in both precisions. By default, operations such as square root and division produce the floating point value closest to the correct mathematical result, in both single and double precision.
You can use a tool like GPU Caps Viewer to find out the compute capability of your graphics card.
For example, a GeForce GTX 560 has a compute capability of 2.1. More details about CUDA compute capabilities can be found HERE.