NARVis: Neural Accelerated Rendering for Real-Time Scientific Point Cloud Visualization

1University of Maryland, College Park, 2University of Maryland, Baltimore County, 3NASA, 4Hampton University
PacificVis 2026
Hurricane visualization GIF
Storms visualization GIF
MB visualization GIF

TLDR: NARVis enables high-quality visualization and stylization of large scientific point clouds (<1B points) at interactive frame rates using neural rendering.

Abstract

NARVis teaser figure

Exploring scientific datasets with billions of samples through real-time visualization presents a challenge: balancing high-fidelity rendering with speed. This work introduces NARVis, a neural accelerated renderer that uses the neural deferred rendering framework to visualize large-scale scientific point cloud data. NARVis augments a real-time point cloud rendering pipeline with high-quality neural post-processing, making the approach ideal for interactive visualization at scale. Specifically, we render the multi-attribute point cloud with a high-performance multi-attribute rasterizer and train a neural renderer to capture the desired post-processing effects of a conventional high-quality renderer. NARVis is effective at visualizing complex multidimensional Lagrangian flow fields and photometric scans of large terrains when compared to state-of-the-art high-quality renderers. Extensive evaluations demonstrate that NARVis prioritizes speed and scalability while retaining high visual fidelity. We achieve competitive frame rates of >126 fps for interactive rendering of >350M points (i.e., an effective throughput of >44 billion points per second) using ~12 GB of memory on an RTX 2080 Ti GPU. Furthermore, NARVis generalizes across different point clouds with similar visualization needs, and the desired post-processing effects can be obtained at substantially high quality even from lower-resolution versions of the original point cloud, further reducing memory requirements.
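The effective-throughput figure follows directly from the frame rate and point count quoted above; a quick back-of-the-envelope check in Python, using the lower-bound numbers from the abstract:

```python
# Sanity-check the effective throughput quoted in the abstract
# (assumed lower-bound figures: >350M points rendered at >126 fps).
points_per_frame = 350_000_000   # points in the large PC
frames_per_second = 126          # reported interactive frame rate

# Effective throughput = points touched per second across all frames.
effective_throughput = points_per_frame * frames_per_second
print(f"{effective_throughput / 1e9:.1f} billion points/second")  # 44.1
```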

Neural Accelerated Rendering Framework

Overview train figure
Training Phase
Overview infer figure
Inference Phase

In this work, we accelerate a given conventional renderer to visualize large-scale point clouds (PCs) in real-time. Specifically, we achieve this by learning what the view-space spatial distribution of points looks like given the color, depth, and additional point attributes rendered by a reference implementation of a high-quality glyph-based visualization technique. NARVis has two major components: a high-performance multi-attribute compute rasterizer (MACR) and a post-processing network. In the training phase, we provide the flexibility to choose the high-quality renderer, per the user's requirements, to create the training datasets that NARVis learns to emulate. In the inference phase, MACR only needs to render the attributed points with depth testing from any position, because the glyphs (geometry, orientation, texture, etc.) can be reconstructed for that view-space point/data distribution by the post-processing neural network in real-time. If the learned view-space distributions are representative enough to cover the different possible visualization configurations, each trained network can render arbitrary datasets with the same emulated glyph or visualization technique. Thus, after the offline training phase, we can deploy NARVis to render high-quality visualizations of the original or even a modified PC (such as a sub-sampled one) in real-time.
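At inference time, the rasterizer's job reduces to a depth-tested scatter of point attributes into per-attribute G-buffers; glyph reconstruction is left entirely to the network. A minimal CPU sketch of that depth-tested resolve, assuming a toy point format of our own (the real MACR is a high-performance GPU compute rasterizer; `rasterize_points` is a hypothetical helper, not the paper's code):

```python
import math

def rasterize_points(points, width, height):
    """Depth-tested point rasterization into a single-attribute G-buffer.

    points: iterable of (x, y, z, attr) with x, y in [0, 1) screen space
    and z the view-space depth (smaller = nearer).  For each pixel we
    keep only the nearest point's attribute, i.e. a plain z-buffer test.
    """
    depth = [[math.inf] * width for _ in range(height)]
    gbuf = [[0.0] * width for _ in range(height)]
    for x, y, z, attr in points:
        px, py = int(x * width), int(y * height)
        if 0 <= px < width and 0 <= py < height and z < depth[py][px]:
            depth[py][px] = z      # depth test: nearest point wins
            gbuf[py][px] = attr    # write its attribute to the G-buffer
    return gbuf, depth
```

In the full pipeline, one such buffer would be produced per point attribute and the stack handed to the post-processing network.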

Nar deets figure

Detailed view of the NARVis architecture. Given the input view and the PC, an MACR, which processes each PC attribute in render and resolve stages (shown in the expanded view), outputs G-buffers for the different point attributes of the PC; these are passed as view-dependent multi-channel features to our post-processing network, a U-Net, which generates the final renderings. Before passing them to the U-Net, we downscale the rendered G-buffers, apply a 1×1 convolution (retaining the number of channels), and concatenate them to the first U-Net blocks at the respective resolutions. (Green blocks: downscaled and 1×1-convolved G-buffers; pink blocks: U-Net block outputs.)
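The 1×1 convolution in the caption is simply a per-pixel linear mix of channels that preserves the channel count (no spatial footprint). A minimal pure-Python sketch of that channel-mixing step, with a hypothetical `conv1x1` helper of our own rather than the paper's implementation:

```python
def conv1x1(gbuffers, weights):
    """Apply a 1x1 convolution over a stack of G-buffer channels.

    gbuffers: list of C channel images, each an H x W nested list.
    weights:  C x C matrix; row o gives the mix producing output
              channel o.  A square matrix retains the channel count,
              matching the architecture description above.
    """
    C = len(gbuffers)
    H, W = len(gbuffers[0]), len(gbuffers[0][0])
    # Each output pixel is a weighted sum over input channels at the
    # same (y, x) location -- no neighborhood is involved.
    return [[[sum(weights[o][c] * gbuffers[c][y][x] for c in range(C))
              for x in range(W)] for y in range(H)] for o in range(C)]
```

With an identity weight matrix the buffers pass through unchanged; a learned matrix lets the network re-weight attributes before they are concatenated into the U-Net.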

Results

Qualitative Comparison

Hurricane CR Hurricane NPBG Hurricane NARVis Hurricane GR
Hurricane CR 1 Hurricane NPBG 1 Hurricane Ours 1 Hurricane GSplat 1
Hurricane CR 2 Hurricane NPBG 2 Hurricane Ours 2 Hurricane GSplat 2
(a) CR (b) NPBG (c) NARVis / Ours (d) GR / GSplat

Glyph Stylization Support

Boxes Boxes NARVis Boxes GT Boxes Diff
Cones Cones NARVis Cones GT Cones Diff
Gaussians Gaussians NARVis Gaussians GT Gaussians Diff
(a) NARVis (b) GT (c) Diff

Effects of Point Cloud Resolution on Rendering Performance

Hurricane GR Hurricane 1x Hurricane 2x Hurricane 4x Hurricane 10x
Hurricane GSplat 1 Hurricane 1x 1 Hurricane 2x 1 Hurricane 4x 1 Hurricane 10x 1
Hurricane GSplat 2 Hurricane 1x 2 Hurricane 2x 2 Hurricane 4x 2 Hurricane 10x 2
(a) GSplat / GR (b) 1x (c) 2x (d) 4x (e) 10x

Generalizability

Within dataset source 1 Within dataset target 1 Within dataset GR 1
Within dataset source 2 Within dataset target 2 Within dataset GR 2
Within dataset source 3 Within dataset target 3 Within dataset GR 3
(a) Source PC (b) Target PC (c) GR

Generalizability Within Dataset. NARVis learns post-processing effects from a source PC and applies them to a target PC within the same dataset family despite variations in geometry and properties, with GR renderings shown for reference.

Performance and Speedup

Runtime and memory performance

NARVis balances rendering latency and quality, outperforming the other renderers in rendering quality across all datasets. It is slower only than the high-performance CR, which has lower visual fidelity than NARVis because it renders points as single pixels. GR has the highest image quality, as it is our reference renderer; however, it also has a high rendering latency due to the alpha-blending overhead (which involves sorting) in large PCs. NPBG, a neural-network-based method, achieves better rendering quality than CR but is memory-intensive and slow because it uses per-point descriptors. NPBG also has higher latency because its OpenGL rasterizer uses the GL_POINTS primitive to render frames at multiple image resolutions, making it unscalable for real-time rendering of large full-resolution PCs.
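The sorting overhead attributed to GR comes from order-dependent alpha blending: semi-transparent fragments at a pixel must be composited back to front with the "over" operator, and the required depth sort grows with PC size. A toy per-pixel sketch of that step (our illustration of standard alpha compositing, not GR's actual implementation):

```python
def composite_back_to_front(fragments):
    """Order-dependent alpha blending for one pixel.

    fragments: list of (depth, color, alpha) with larger depth = farther.
    The depth sort is the per-pixel O(n log n) cost that makes this
    style of renderer expensive on very large point clouds.
    """
    fragments = sorted(fragments, key=lambda f: -f[0])  # farthest first
    color = 0.0
    for _, c, a in fragments:
        color = a * c + (1.0 - a) * color  # the "over" operator
    return color
```

Depth-tested opaque rasterization, as used by MACR, needs no such sort: each pixel keeps a single nearest fragment regardless of submission order.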

Video Results

BibTeX

@inproceedings{hegde2026narvis,
  author    = {Hegde, Srinidhi and Kullman, Kaur and Grubb, Thomas and Lait, Leslie and Guimond, Stephen and Zwicker, Matthias},
  title     = {NARVis: Neural Accelerated Rendering for Real-Time Scientific Point Cloud Visualization},
  booktitle = {2026 IEEE Pacific Visualization Conference (PacificVis)},
  year      = {2026},
  organization = {IEEE}
}