Here is a voxel splatting demo based on OpenCL + OpenGL. Splatting is a technique that merges all voxels on screen into a smooth surface. The voxels are rendered as single points, and the gaps between points are filled by an OpenCL post-processing filter.
Since there are still lots of rumors circulating about Unlimited Detail, I wanted to give point-based rendering a try and see how well it works in OpenCL.
It turned out that performance is much better than basic point rendering in OpenGL: while OpenGL allowed me to render around 630M points/sec, OpenCL reached 3-4 times that speed, at ~2 billion points per second on a GTX 580M GPU.
Just rendering points does not lead to a smooth surface, however. For that, a post-processing filter is required: it increases the size of the points and fills the holes. I have implemented a very simple one to show that it works. As for culling, only frustum culling is implemented. More advanced hierarchical occlusion culling might yield extra frames, especially in indoor scenes. A hierarchical depth buffer is not used either; it might give additional performance when adapted to point-based rendering.
The main challenge of the implementation was to find a data structure that allows parallel access while still keeping the size per voxel reasonable. Rendering the points is fairly straightforward.
In the image, you can see the render stages:
Right: Z-buffer
Middle: colored points
Left: including post-processing
Render Resolution: 1024×768
Framerate: 30-40 fps
Scene Dimension: 20k x 1k x 20k voxels
Dataset : 1024x1024x1024 Voxels (single instance)
Data size per voxel: 4 bytes at each LOD, ~5 bytes in total over all LODs
On the author’s notebook with a GTX 580M, the demo runs at around 40 FPS, i.e. about 2 billion points/second. I tested the demo with the default settings (1024×768) on my GTX 680 (R306.02) and get around 80 FPS (roughly 4 billion points/sec?):
You can download the demo from this page.
9 thoughts on “OpenCL Voxel Splatting Demo”
Yep, looks nearly as crappy as the Unlimited Detail guys’ stuff.
@DrBalthar well, I guess you’re missing “the point”: point-based rendering is very popular for GI effects, with a little blurring here and there, or bilateral upsampling. Everybody would love to see point-based GI for outdoor scenes with millions of points at 120 Hz… one step at a time.
I am not missing the point. Point-based rendering is crap when you try to reproduce sharp edges; no amount of filtering will ever get you there.
Unfortunately, most man-made objects have SHARP edges. In terms of GI it doesn’t matter, as it is a secondary effect that will not be picked up by our visual system — unless it is caustics, which again can have sharp edges. But for primary visibility, points are duff!
@DrBalthar: take a look at Minecraft. Those voxels are an order of magnitude larger, and people love it.
@DrBalthar “Filtering”..hehe.. Point set surfaces have, for a long time, been used to efficiently represent sharp features without requiring heavy oversampling. There are many references and implementations doing this, for example in recent work: http://hal.inria.fr/docs/00/35/49/69/PDF/RIMLS_eg09.pdf
Also, in today’s renderers, what is to stop point-based approaches being used for distant models — which appears to be (judging from excellent research such as this) where they excel?
As for GI, please read up on the STAR by http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/GISTAR_CGF12.pdf – you will find PBGI methods (in particular BSH) are an extremely relevant structure for next-generation renderers allowing multiple cuts from many different views quickly, while still allowing for quick updates e.g. in the case of dynamic scenes http://perso.telecom-paristech.fr/~boubek/papers/ManyLoDs/ManyLoDs.pdf
P.S. I am not defending Euclideon, who appear to have just built a top-down static hierarchical renderer of direct light using some theory borrowed from QSplat. I’m defending point-based approaches in general, and relevant research such as this, while not even mentioning some of the many other advantages they bring (e.g. accurate hierarchical deformation http://graphics.stanford.edu/~mapauly/Pdfs/ShapeModeling.pdf , seamless texture mapping, etc.).
Guys, I have personally seen a 2-billion-point cloud viewed and “manipulated” — rotated around a scene — on a 4-core Intel with 4 GB RAM and a GTX 280, about 2.5 years back in Holland. So if you don’t believe it, that’s your thing, but it is possible… with special software that was acquired by Autodesk 8 months back, which they want to implement in their SW packages. They implemented a solution where a point cloud was rendered as a mesh, flat surfaces, etc. So Unlimited-Detail-style technology CAN work and sooner or later will be available, whether you believe it or not.
Yep, and people love 8-bit block-graphics games; not sure what your point is?
@SirPhDCP I didn’t say point-based graphics are bad per se; I just don’t think it is a great solution for doing everything. And I still haven’t seen any really convincing aliasing and filtering solution when using textures.
And here it is, a PUBLIC demo of EUCLIDEON in BIG INDUSTRY…
How did they do the z-test in software so fast? Must there be a crappy, slow mutex texture? Or do they get the speedup through clever instancing?
Comments are closed.