Point-based rendering on the GPU



Introduction

Rendering highly complex models produces triangles whose projected area covers less than a few pixels. Standard scanline-conversion methods become inefficient for such tiny triangles because of the per-triangle setup overhead. Above a certain model complexity, points are therefore the conceptually more efficient rendering primitive. Holes in the rendered image (e.g. when zooming in) can be avoided by image-based filters, by adjusting the sampling density, or by so-called surface splatting. In this case each point is associated with a radius and a normal vector and thus represents a small disc in 3-space, which is projected onto the image plane.
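
To make the splatting idea concrete, the sketch below renders each point as an OpenGL point sprite: the vertex stage scales gl_PointSize with the splat radius divided by the eye-space depth, and the fragment stage cuts the square sprite down to a disc. This is a minimal sketch, not a method from any particular paper; all attribute and uniform names (inPosition, inRadius, uModelView, ...) are illustrative assumptions, and full surface splatting additionally blends and normalizes overlapping splats.

    // Splat shaders (GLSL, embedded as C++ raw strings). All attribute and
    // uniform names are illustrative assumptions.
    static const char* kSplatVS = R"(
        #version 330 core
        layout(location = 0) in vec3  inPosition;
        layout(location = 1) in vec3  inNormal;
        layout(location = 2) in float inRadius;   // disc radius in object space
        uniform mat4  uModelView;
        uniform mat4  uProjection;
        uniform float uViewportHeight;            // in pixels
        out vec3 vNormal;
        void main() {
            vec4 eye     = uModelView * vec4(inPosition, 1.0);
            gl_Position  = uProjection * eye;
            // Projected disc diameter in pixels: 2r * focal length / depth * H/2.
            gl_PointSize = inRadius * uProjection[1][1] * uViewportHeight / -eye.z;
            vNormal = mat3(uModelView) * inNormal;  // assumes no non-uniform scale
        }
    )";
    static const char* kSplatFS = R"(
        #version 330 core
        in vec3 vNormal;
        out vec4 fragColor;
        void main() {
            // gl_PointCoord spans [0,1]^2 over the sprite; clip the square to a disc.
            vec2 d = 2.0 * gl_PointCoord - 1.0;
            if (dot(d, d) > 1.0) discard;
            float diffuse = max(dot(normalize(vNormal), vec3(0.0, 0.0, 1.0)), 0.0);
            fragColor = vec4(vec3(0.2 + 0.8 * diffuse), 1.0);
        }
    )";
    // Host side: enable gl_PointSize writes with glEnable(GL_PROGRAM_POINT_SIZE),
    // then draw with glDrawArrays(GL_POINTS, 0, pointCount).

This yields opaque circular splats; the literature on surface splatting refines this with elliptical, screen-space-filtered splats.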


[Figure: Point-based rendering of large meshes.]

Assignment overview

Implement a method for point-based rendering on graphics hardware. Good starting points are the papers provided here.

Input meshes/point samples can be found on Stanford’s 3D scanning repository page.
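
The Stanford models are distributed as PLY files. As a minimal sketch, the reader below extracts vertex positions from an ASCII PLY; it assumes x, y, z are the first three vertex properties and ignores everything else (some repository files are binary PLY, for which a library such as tinyply is more practical).

    // Minimal ASCII PLY vertex reader (sketch only). Assumes the first three
    // vertex properties are x, y, z; faces and other properties are skipped.
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    std::vector<Vec3> loadPlyVertices(const std::string& path) {
        std::ifstream in(path);
        std::string line;
        size_t vertexCount = 0;
        while (std::getline(in, line)) {          // header ends at "end_header"
            std::istringstream ss(line);
            std::string tok, name;
            ss >> tok;
            if (tok == "element" && (ss >> name) && name == "vertex")
                ss >> vertexCount;
            else if (tok == "end_header")
                break;
        }
        std::vector<Vec3> pts(vertexCount);
        for (size_t i = 0; i < vertexCount && std::getline(in, line); ++i) {
            std::istringstream ss(line);
            ss >> pts[i].x >> pts[i].y >> pts[i].z;   // trailing properties ignored
        }
        return pts;
    }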

Your program should allow loading and point-based visualization of triangle meshes and unorganized point clouds. Additionally, the user should be able to adjust the parameters of your visualization algorithm.
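
For unorganized point clouds, no radius or normal comes with the data, so these attributes must be estimated; a common choice for the splat radius is the distance to the k-th nearest neighbour, and normals can be estimated analogously from a PCA of each neighbourhood. The brute-force sketch below illustrates the radius estimate (a k-d tree would replace the O(n^2) loop for realistically sized models); the neighbour count k and a global radius scale are natural candidates for the user-adjustable parameters mentioned above.

    // Per-point splat radius as the distance to the k-th nearest neighbour.
    // Brute-force O(n^2) sketch; assumes pts.size() > k. Vec3 as in the
    // loader sketch above.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    std::vector<float> estimateRadii(const std::vector<Vec3>& pts, int k = 6) {
        std::vector<float> radii(pts.size());
        std::vector<float> d2(pts.size());
        for (size_t i = 0; i < pts.size(); ++i) {
            for (size_t j = 0; j < pts.size(); ++j) {
                float dx = pts[j].x - pts[i].x;
                float dy = pts[j].y - pts[i].y;
                float dz = pts[j].z - pts[i].z;
                d2[j] = dx * dx + dy * dy + dz * dz;
            }
            // After nth_element, d2[k] holds the k-th smallest squared distance;
            // index 0 of the sorted order is the point's zero distance to itself.
            std::nth_element(d2.begin(), d2.begin() + k, d2.end());
            radii[i] = std::sqrt(d2[k]);
        }
        return radii;
    }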