In this work, we accelerate a given conventional renderer to visualize large-scale PCs in real time. Specifically, we achieve this by learning what the view-space spatial distribution of points looks like given the color, depth, and additional point attributes rendered by a reference implementation of a high-quality glyph-based visualization technique. NARVis has two major components: a high-performance multi-attribute compute rasterizer (MACR) and a post-processing network. In the training phase, the user is free to choose the high-quality renderer that generates the training datasets NARVis learns to emulate. In the inference phase, the MACR only needs to render the attributed points with depth testing from any position, because the glyphs (geometry, orientation, texture, etc.) can be reconstructed for that view-space point/data distribution by the post-processing neural network in real time. If the learned view-space distributions are representative enough to cover the different possible visualization configurations, each trained network can render arbitrary datasets with the same emulated glyph or visualization technique. Thus, after the offline training phase, we can deploy NARVis to render high-quality visualizations of the original or even a modified PC (e.g., a sub-sampled one) in real time.
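To make the inference-time input concrete, the following is a minimal software sketch of what the MACR produces: attributed points splatted into per-attribute G-buffers with depth testing, keeping only the nearest point per pixel. All names and shapes here are our own illustrative assumptions, not the paper's implementation (which runs as a GPU compute rasterizer, not a Python loop).

```python
import numpy as np

def rasterize_gbuffers(xy, depth, attrs, h, w):
    """Depth-tested point splatting into G-buffers (illustrative only).

    xy:    (N, 2) integer pixel coordinates of projected points
    depth: (N,)   view-space depths
    attrs: (N, C) per-point attributes (color, scalar fields, ...)
    Returns a (C + 1, h, w) stack: C attribute channels plus a depth channel.
    """
    zbuf = np.full((h, w), np.inf)
    gbuf = np.zeros((attrs.shape[1] + 1, h, w))
    # Resolve near-to-far so each pixel keeps its closest point only.
    for i in np.argsort(depth):
        x, y = xy[i]
        if 0 <= x < w and 0 <= y < h and depth[i] < zbuf[y, x]:
            zbuf[y, x] = depth[i]
            gbuf[:-1, y, x] = attrs[i]
            gbuf[-1, y, x] = depth[i]
    return gbuf
```

The resulting sparse, view-dependent buffers are exactly what the post-processing network sees; reconstructing full glyph geometry from them is the network's job, not the rasterizer's.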
Detailed view of the NARVis architecture. Given the input view and the PC, the MACR outputs G-buffers for the different point attributes of the PC, processing each attribute in render and resolve stages (shown in the expanded view); these G-buffers are passed as view-dependent multi-channel features to our post-processing network, a U-Net, which generates the final renderings. Before passing them to the U-Net, we downscale the rendered G-buffers, apply a 1 × 1 convolution (retaining the number of channels), and concatenate them to the first U-Net blocks at the respective resolutions. (Green blocks: downscaled and 1 × 1-convolved G-buffers; pink blocks: U-Net block outputs.)
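The caption's feature-injection scheme can be sketched as follows: each G-buffer stack is average-pooled to a U-Net level's resolution, mapped through a channel-preserving 1 × 1 convolution (here a plain channel-mixing matrix), and concatenated to that level's feature map. Function names, the pooling choice, and tensor layouts are our assumptions for illustration.

```python
import numpy as np

def downscale(g, h, w):
    # Average-pool a (C, H, W) buffer to (C, h, w); assumes integer factors.
    c, H, W = g.shape
    return g.reshape(c, h, H // h, w, W // w).mean(axis=(2, 4))

def inject_gbuffers(gbuf, unet_feats, weights):
    """Concatenate projected G-buffer features onto each U-Net level.

    gbuf:       (C, H, W) rendered G-buffer stack at full resolution
    unet_feats: list of (C_k, H_k, W_k) feature maps, finest level first
    weights:    list of (C, C) matrices; each acts as a 1x1 convolution
                that retains the channel count, as in the caption
    """
    out = []
    for feat, wmat in zip(unet_feats, weights):
        g = downscale(gbuf, feat.shape[1], feat.shape[2])
        g = np.einsum('oc,chw->ohw', wmat, g)  # channel-preserving 1x1 conv
        out.append(np.concatenate([feat, g], axis=0))
    return out
```

Because a 1 × 1 convolution only mixes channels, applying it after downscaling is cheap, and concatenation (rather than addition) lets the U-Net blocks learn how to weight the rendered attributes against their own features.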