Uncertainty-Aware Diffusion-Guided Refinement of 3D Scenes

ICCV 2025

University of California, Riverside

Input view, Flash3D result, UAR-Scenes result, and ground-truth comparison.
UAR-Scenes improves novel views from a single input image by preserving visible regions while producing plausible completions for unseen scene content.

Video Presentation

Abstract

Reconstructing 3D scenes from a single image is a fundamentally ill-posed, severely under-constrained task. Consequently, when the scene is rendered from novel camera views, particularly in unseen regions far away from the input camera, existing single-image-to-3D reconstruction methods produce incoherent and blurry views. In this work, we address these inherent limitations of existing single-image-to-3D-scene feedforward networks. To compensate for the lack of information beyond the input image's view, we leverage a strong generative prior, a pre-trained latent video diffusion model, to iteratively refine a coarse scene represented by optimizable Gaussian parameters. To ensure that the style and texture of the generated images align with those of the input image, we incorporate on-the-fly Fourier-style transfer between the generated images and the input image. Additionally, we design a semantic uncertainty quantification module that computes per-pixel entropy and yields uncertainty maps, which guide refinement using the most confident pixels while discarding the remaining, highly uncertain ones. We conduct extensive experiments on real-world scene datasets, including in-domain RealEstate-10K and out-of-domain KITTI-v2, showing that our approach provides more realistic and high-fidelity novel view synthesis results than existing state-of-the-art methods.

Motivation

Single-image 3D reconstruction models are fast and convenient, but the input image simply cannot reveal what lies behind occlusions or beyond the observed camera frustum. As the camera moves away from the input view, deterministic reconstructions tend to average uncertain content, causing blurred geometry, missing structures, and inconsistent textures.

UAR-Scenes asks whether a generative prior can supply plausible missing information without corrupting the parts of the scene that were already observed. The answer is a post-hoc refinement loop: keep the reliable structure from the base reconstruction and learn from diffusion-generated views only where the generated supervision is semantically confident.

Method Overview

Pipeline diagram showing Flash3D initialization, diffusion pseudo-view generation, uncertainty estimation, and Gaussian refinement.
Pipeline: coarse Gaussians from a feed-forward reconstructor are refined using pseudo views from a camera-controlled latent video diffusion model (LVDM), uncertainty maps, adaptive densification and pruning, and style-aligned reconstruction losses.
1. Initialize the Scene

A pre-trained single-image reconstruction model produces noisy Gaussian parameters that serve as the optimization target.

2. Generate Pseudo Views

A camera-controlled latent video diffusion model samples novel views for target poses, providing 2D supervision where no ground truth is available.

3. Measure Confidence

MLLM-assisted object tags and open-vocabulary segmentation produce per-pixel entropy maps that identify uncertain generated regions.

4. Refine Gaussians

Uncertainty-weighted losses and Fourier style transfer guide optimization toward confident, texture-consistent completions.
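
The Fourier step admits a compact implementation. Below is a minimal sketch, assuming the common low-frequency amplitude-swap formulation: the generated pseudo view keeps its own phase spectrum but inherits the input image's low-frequency amplitudes, which carry global color and texture statistics. The function name, tensor shapes, and the band-width parameter beta are illustrative assumptions rather than the exact UAR-Scenes implementation.

import torch

def fourier_style_transfer(generated, reference, beta=0.05):
    # Replace the low-frequency amplitude of `generated` with that of
    # `reference` while keeping the phase of `generated`.
    # generated, reference: (B, C, H, W) tensors in [0, 1]; beta sets the
    # half-width of the swapped low-frequency band as a fraction of H and W.
    fft_gen = torch.fft.fft2(generated, dim=(-2, -1))
    fft_ref = torch.fft.fft2(reference, dim=(-2, -1))

    amp_gen, phase_gen = torch.abs(fft_gen), torch.angle(fft_gen)
    amp_ref = torch.abs(fft_ref)

    # Center the spectra so the low frequencies sit in the middle.
    amp_gen = torch.fft.fftshift(amp_gen, dim=(-2, -1))
    amp_ref = torch.fft.fftshift(amp_ref, dim=(-2, -1))

    _, _, h, w = generated.shape
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    cy, cx = h // 2, w // 2
    amp_gen[..., cy - bh:cy + bh, cx - bw:cx + bw] = \
        amp_ref[..., cy - bh:cy + bh, cx - bw:cx + bw]

    # Recombine the mixed amplitude with the original phase.
    amp_gen = torch.fft.ifftshift(amp_gen, dim=(-2, -1))
    restyled = torch.fft.ifft2(amp_gen * torch.exp(1j * phase_gen), dim=(-2, -1))
    return restyled.real.clamp(0, 1)

Applied on the fly to each sampled pseudo view before it enters the loss, a transform of this kind keeps diffusion textures from drifting away from the input image's appearance.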

Uncertainty-Guided Supervision

Uncertainty quantification pipeline using object tagging, segmentation, entropy, and confidence-weighted refinement.
Semantic uncertainty quantification converts generated pseudo views into confidence-aware supervision for Gaussian refinement.
Pseudo image and uncertainty map showing confident and uncertain regions.
Uncertainty maps emphasize reliable regions while suppressing ambiguous or blurry pseudo-view content.
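
As a minimal sketch of how such maps can gate the supervision, the snippet below computes normalized per-pixel entropy from open-vocabulary segmentation scores and converts it into a confidence weight for the pseudo-view reconstruction loss. The softmax over class scores, the threshold tau, and the function names are assumptions for illustration, not the exact scheme used in the paper.

import math
import torch
import torch.nn.functional as F

def entropy_confidence(seg_logits, eps=1e-8):
    # seg_logits: (B, C, H, W) open-vocabulary class scores for a pseudo view.
    # Returns a (B, 1, H, W) confidence map in [0, 1]:
    # 1 = low entropy (confident), 0 = maximally uncertain.
    probs = F.softmax(seg_logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1, keepdim=True)
    entropy = entropy / math.log(seg_logits.shape[1])  # normalize by log C
    return (1.0 - entropy).clamp(0.0, 1.0)

def uncertainty_weighted_l1(render, pseudo_view, confidence, tau=0.5):
    # L1 photometric loss against the generated pseudo view, keeping only
    # pixels whose confidence exceeds tau and weighting the rest by it.
    weight = confidence * (confidence > tau).float()
    return (weight * (render - pseudo_view).abs()).sum() / (weight.sum() + 1e-8)

In the refinement loop, a weighted term like this would drive learning from the generated views, while renderings of regions already visible in the input can keep an unweighted reconstruction loss.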

Qualitative Results

Qualitative comparison across input view, Flash3D, UAR-Scenes, and ground truth.
Compared with Flash3D, UAR-Scenes better completes unseen windows, boundaries, and scene structure while keeping renderings faithful to the input.
Additional qualitative comparison examples.
Additional examples show improved realism and consistency across indoor, outdoor, and driving scenes.

Quantitative Results

RealEstate-10K Novel View Synthesis

Method        5-frame PSNR   5-frame SSIM   5-frame LPIPS   10-frame PSNR   10-frame SSIM   10-frame LPIPS   Wide PSNR   Wide SSIM   Wide LPIPS
MINE             28.45          0.897           0.111           25.89           0.850            0.150          24.75       0.820        0.179
Flash3D          28.46          0.899           0.100           25.94           0.857            0.133          24.93       0.833        0.160
UAR-Scenes       28.67          0.902           0.095           26.54           0.861            0.112          27.81       0.887        0.107

RealEstate-10K Interpolation vs. Extrapolation

Method        Int. PSNR   Int. SSIM   Int. LPIPS   Ext. PSNR   Ext. SSIM   Ext. LPIPS   Ext. FID
PixelNeRF        24.00       0.589        0.550       20.05       0.575        0.567      160.77
Du et al.        24.78       0.820        0.410       21.23       0.760        0.480       14.34
pixelSplat       25.49       0.794        0.291       22.62       0.777        0.216        5.78
latentSplat      25.53       0.853        0.280       23.45       0.801        0.190        2.97
MVSplat          26.39       0.869        0.128       24.04       0.812        0.185        3.87
Flash3D          23.87       0.811        0.185       24.10       0.815        0.185        4.02
UAR-Scenes       26.37       0.871        0.125       24.37       0.819        0.144        2.55

Out-Domain Evaluation on KITTI-v2

Method        PSNR    SSIM    LPIPS
LDI           16.50   0.572     -
SV-MPI        19.50   0.733     -
BTS           20.10   0.761   0.144
MINE          21.90   0.828   0.112
Flash3D       21.96   0.826   0.132
UAR-Scenes    22.31   0.844   0.128

Ablations

Ablation results comparing Flash3D, LVDM, LVDM with Fourier style transfer, and UAR-Scenes.
Fourier style transfer reduces over-saturated diffusion textures, and uncertainty weighting further improves the final refined rendering.
Model                  LVDM   FST   Uncertainty   PSNR    SSIM    LPIPS
Baseline                ×      ×        ×         24.93   0.833   0.160
Baseline + LVDM         ✓      ×        ×         27.24   0.867   0.126
Baseline + LVDM-FST     ✓      ✓        ×         27.33   0.869   0.119
UAR-Scenes              ✓      ✓        ✓         27.81   0.887   0.107

BibTeX

@inproceedings{bose2025uncertainty,
  title={Uncertainty-Aware Diffusion-Guided Refinement of 3D Scenes},
  author={Bose, Sarosij and Dutta, Arindam and Nag, Sayak and Zhang, Junge and Li, Jiachen and Karydis, Konstantinos and Roy-Chowdhury, Amit K.},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}