DIVeR: Real-time and Accurate Neural Radiance Fields
with Deterministic Integration for Volume Rendering
Abstract
DIVeR builds on the key ideas of NeRF and its variants – density models and volume rendering – to learn 3D object models that can be rendered realistically from small numbers of images. In contrast to all previous NeRF methods, DIVeR uses deterministic rather than stochastic estimates of the volume rendering integral. DIVeR’s representation is a voxel based field of features. To compute the volume rendering integral, a ray is broken into intervals, one per voxel; components of the volume rendering integral are estimated from the features for each interval using an MLP, and the components are aggregated. As a result, DIVeR can render thin translucent structures that are missed by other integrators. Furthermore, DIVeR’s representation has relatively exposed semantics compared to other such methods – moving feature vectors around in the voxel space results in natural edits. Extensive qualitative and quantitative comparisons to current state-of-the-art methods show that DIVeR produces models that (1) render at or above state-of-the-art quality, (2) are very small without being baked, (3) render very fast without being baked, and (4) can be edited in natural ways. Our real-time code is available at: https://github.com/lwwu2/diver-rt
1 Introduction
Turning a small set of images into a renderable model of a scene is an important step in scene generation, appearance modeling, relighting, and computational photography. The task is well-established and widely studied; what form the model should take is still very much open, with models ranging from explicit representations of geometry and material through plenoptic function models [Adelson1991ThePF]. Plenoptic functions are hard to smooth, but neural radiance field (NeRF) [mildenhall2020nerf] demonstrates that a Multi Layer Perceptron (MLP) with positional encoding is an exceptionally good smoother, resulting in an explosion of variants (details in related work). All use one key trick: the scene is modeled as density and color functions, rendered using stochastic estimates of volume rendering integrals. We describe an alternative approach, deterministic integration for volume rendering (DIVeR), which is competitive in speed and accuracy with the state of the art.
Figure 1: Qualitative comparison. Panels: NeRF, PlenOctrees, DIVeR (ours), ground truth.
We use a deterministic integrator because stochastic estimates of integrals present problems. Samples may miss important effects (Fig. 1). Fixing this by increasing the sampling rate is costly: accuracy improves slowly in the number of samples $N$ (for Monte Carlo methods, the standard deviation goes as $1/\sqrt{N}$ [boyle]), but the cost of computation grows linearly. In contrast, our integrator combines per-voxel estimates of the volume rendering integral into a single estimate using alpha blending (Sec. 4.2).
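For intuition (our illustration, not from the paper), a small numpy experiment shows the $1/\sqrt{N}$ behavior: quadrupling the sample count only roughly halves the standard deviation of a Monte Carlo estimate.

```python
# Demo (ours): Monte Carlo standard error shrinks as 1/sqrt(N), so halving the
# error requires roughly 4x the samples.
import numpy as np

def mc_estimate(f, n, rng):
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    x = rng.random(n)
    return f(x).mean()

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)   # true integral over [0, 1] is 2/pi
truth = 2.0 / np.pi

# Empirical standard deviation of the estimator at two sample counts.
errs = {n: np.std([mc_estimate(f, n, rng) for _ in range(200)]) for n in (100, 400)}
ratio = errs[100] / errs[400]     # expect roughly sqrt(400 / 100) = 2
```

The ratio of errors at 100 vs. 400 samples comes out near 2, not 4, which is the slow convergence the text refers to.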
Like NSVF [liu2020neural], we use a voxel based representation of the color and density. Rather than represent functions, we provide a feature vector at each voxel vertex. The feature vectors at the vertices of a given voxel are used by an MLP to compute the deterministic integrator estimate for the section of any ray passing through the voxel. A model is learned by gradient descent on feature vectors and MLP parameters to minimize the prediction error for the training views; it is rendered by querying the resulting structure with new rays. Sec. 4 provides the details.
Given similar computational resources, our model is efficient to train, likely because the deterministic integration can fit the integral better, and there is no gradient noise produced by stochastic integral estimates. As Sec. 5 shows, the procedure results in very small models (about 68 MB) which render very fast (about 47 FPS on a single 1080 Ti GPU) and have comparable PSNR with the best NeRF models.
2 Background
NeRF [mildenhall2020nerf] represents 3D scenes with a density field $\sigma(\mathbf{x})$ and a color field $\mathbf{c}(\mathbf{x}, \mathbf{d})$, functions of 3D position $\mathbf{x}$ and view direction $\mathbf{d}$ encoded by an MLP with weights $\theta$. To render a pixel, a ray $\mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d}$ is shot from the camera center $\mathbf{o}$ through the pixel center in direction $\mathbf{d}$ and follows the volume rendering equation [10.1145/964965.808594] to accumulate the radiance:
$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right) \tag{1}$$
Closed-form solutions for Eq. 1 are not available, so NeRF uses Monte Carlo integration, randomly sampling $N$ points $t_i$ along the ray between the eye $t_n$ and the far bound $t_f$, with radiance and density values $\mathbf{c}_i$ and $\sigma_i$. The radiance and density functions are then treated as constant within each interval $[t_i, t_{i+1}]$, and an approximation of the volume rendering equation is given as:
$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i\,\alpha_i\,\mathbf{c}_i, \qquad \alpha_i = 1 - \exp(-\sigma_i\,\delta_i) \tag{2}$$
$$T_i = \prod_{j=1}^{i-1} (1 - \alpha_j) \tag{3}$$
where $T_i$ denotes the accumulated alpha values along the ray up to interval $i$ and $\delta_i = t_{i+1} - t_i$ is the interval length. During training, NeRF learns to adjust the density and color fields to reproduce the training images, which is achieved by optimizing the weights $\theta$ with respect to the squared error between each rendered pixel $\hat{C}(\mathbf{r})$ and its ground truth $C_{gt}(\mathbf{r})$:
$$\mathcal{L} = \sum_{\mathbf{r}} \left\lVert \hat{C}(\mathbf{r}) - C_{gt}(\mathbf{r}) \right\rVert_2^2 \tag{4}$$
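As a concrete sketch (ours; the variable names are illustrative), the quadrature of Eqs. 2–3 amounts to alpha compositing per-sample contributions along a ray:

```python
# Sketch (ours): the standard NeRF quadrature of Eqs. 2-3 for a single ray.
import numpy as np

def composite(sigmas, colors, deltas):
    """Alpha-composite per-sample densities/colors along one ray.

    sigmas: (N,) densities; colors: (N, 3) radiance; deltas: (N,) interval lengths.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # Eq. 2 (alpha)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # Eq. 3 (T_i)
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # Eq. 2 (sum)
```

A dense sample early along the ray drives the transmittance of everything behind it toward zero, which is how occlusion emerges from Eq. 2.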
3 Related Work
Novel view synthesis:
All scene modeling methods attempt to exploit regularities in the plenoptic function (the spectral radiance traveling in any direction at any point) of a scene. One approach is to compute an explicit geometric representation (point clouds [aliev2020neural, 9064947]; meshes [Riegler2020FVS, Riegler2021SVS, 10.1145/3306346.3323035]), usually obtained from a 3D reconstruction algorithm (e.g. COLMAP [schoenberger2016sfm]). The geometry can then carry deep features, which are projected and processed by neural networks to produce the image. Building an end-to-end optimizable pipeline is hard, however. Alternatively, one could use voxel grids [Lombardi:2019, rematasCVPR20, sitzmann2019deepvoxels], where scene observations are encoded as 3D features and processed by 3D and 2D convolutional neural networks (CNNs) to produce the rendered images, yielding cleaner training but a very memory intensive model that is unsuitable for high resolution rendering.
Multi-plane images (MPIs) [Zhou2018StereoM, Srinivasan2019PushingTB, Mildenhall2019LocalLF, Wizadwongsa2021NeX] offer novel views without requiring a precise geometry proxy. One represents a scene as a set of parallel RGBA images and synthesizes a novel view by warping the images to the target view and alpha blending them; this fails when the view change is too large. Image based rendering (IBR) approaches [Wang2021IBRNetLM, chen2021mvsnerf, SRF] render a view by interpolating the nearby observations directly. Most IBR methods generalize well to unseen data, so a new scene can be rendered off-the-shelf or after a few epochs of fine-tuning.
An alternative is to represent proxies for the plenoptic function using neural networks, then raycast. [sitzmann2019srns, yariv2020multiview, Kellnhofer:2021:nlr] use signed distance field like functions, and [Saito2019PIFuPI, Niemeyer2020DifferentiableVR] represent the scene geometry as an occupancy field. NeRF [mildenhall2020nerf] models the plenoptic function using an MLP to encode a density field (of position) and a color field (of position and direction). The radiance at a point in a direction is given by a volume rendering integral [10.1145/964965.808594]. Training is done by adjusting the MLP parameters to produce the right answer for a given set of images. The method can produce photo-realistic rendering on complex scenes, including rendering transparent surfaces and view dependent effects, but takes a long time to train and evaluate.
NeRF has resulted in a rich collection of variants. [boss2021nerd, nerv2021, Zhang2021NeRFactorNF] modify the NeRF to allow control of surface materials and lighting; NeRF-W [martinbrualla2020nerfw] augments the inputs with image features to help resolve ambiguity between photos in the wild. [park2021nerfies, pumarola2020d, park2021hypernerf] show how to model deformation, and [li2020neural, xian2021space, du2021nerflow, Gao-freeviewvideo, li2021neural] apply NeRF to 4D videos. Finally, [Wang2021IBRNetLM, SRF, chen2021mvsnerf, yu2020pixelnerf, tancik2020meta] try to improve the generalizability and training speed, and [Chan2021piGANPI, Schwarz2020NEURIPS, grf2020, rematasICML21, Niemeyer2020GIRAFFE] adopt the architecture to generative models.
Rendering NeRF faster:
NeRF’s stochastic integrator not only misses thin structures (which are hard to hit with random samples; Fig. 1), but also presents efficiency problems. The main strategy for improving the efficiency of NeRF (as of any MC integrator) is a better importance function [boyle]. NSVF [liu2020neural] significantly reduces the number of samples (equivalently, MLP calls and render time) by imposing a voxel grid on the density, then pruning voxels with empty density at training time. An alternative is a depth oracle that ensures MLP samples occur only close to points with high density [neff2021donerf]. AutoInt [autoint2021] further offers a more efficient estimator of the volume rendering integral by constructing an approximate antiderivative (though the absorption integral must still be approximated), which allows fewer MLP queries at rendering time.
But pure importance based methods cannot render in real time, because they rely on MLPs that are relatively expensive to evaluate. FastNeRF [garbin2021fastnerf] discretizes the continuous fields into bins and caches evaluated bins for subsequent frames. PlenOctrees [yu2021plenoctrees] and SNeRG [hedman2021baking] pre-bake the results of the NeRF into sparse voxels and use efficient ray marching to achieve interactive frame rates. These methods achieve real-time rendering at the cost of a noticeable loss in quality (ours does not), or of requiring a high-resolution voxel grid and so a large storage cost (ours does not). Alternative strategies include caching MLP calls into MPIs (NeX [Wizadwongsa2021NeX]) and speeding up MLP evaluation by breaking one MLP into many small local specialist MLPs (KiloNeRF [reiser2021kilonerf]). In contrast, we use the representation and MLP obtained at training time.
4 Method
As shown in the overall rendering pipeline (Fig. 2), our DIVeR method differs from NeRF style models in two important ways: (1) we represent the fields as a voxel grid of feature vectors, and (2) we use a decoder MLP with learnable weights (Fig. 6) in a deterministic integrator that estimates partial integrals of the scene's fields. To estimate the volume rendering integral for a particular ray, we decompose it into intervals corresponding to the voxels the ray passes through. Each interval reports an approximate estimate of its voxel's contribution, and these are accumulated into the rendering result. The fields are learned by adjusting the feature vectors and the MLP weights to produce close approximations of the observed images.
4.1 Voxel based deep implicit fields
As in NSVF [liu2020neural], the feature vectors are placed at the vertices of the voxel grid; feature values inside each voxel are given by trilinear interpolation of the voxel's eight corners, which yields a piecewise trilinear feature function. The voxel grid can be thought of as a 3D cache of intermediate sums of NeRF's MLP, which explains why inference is fast (Sec. 5.3) yet can still model complicated spatial behavior compactly (Sec. 5.2). Because voxels in empty space make no contribution to the volume rendering, the voxel grid can also be stored in a sparse representation (Sec. 4.5), which further speeds up rendering and reduces the storage cost (Sec. 5.3).
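A minimal sketch (ours) of the per-voxel trilinear interpolation described above, with local coordinates in $[0,1]^3$ inside one voxel:

```python
# Sketch (ours): trilinear interpolation of 8 corner feature vectors at a point
# with local coordinates (x, y, z) in [0, 1]^3 inside one voxel.
import numpy as np

def trilinear(corners, x, y, z):
    """corners: (2, 2, 2, F) features indexed by corner offsets; returns (F,)."""
    wx, wy, wz = np.array([1 - x, x]), np.array([1 - y, y]), np.array([1 - z, z])
    w = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]  # (2,2,2) weights
    return (w[..., None] * corners).sum(axis=(0, 1, 2))
```

The eight weights sum to one, so a constant field interpolates to itself; along each axis the weight is linear in the local coordinate.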
Initializing voxel features using implicit MLP:
If each feature vector is trained independently from a random initialization, our representation scheme tends to overfit during training (Fig. 3). This suggests that the optimization of the feature vectors should be correlated, but it is not obvious which correlation strategy to apply. Instead, we take an MLP that maps the positionally encoded vertex position on the voxel grid to the feature vector at that position (the implicit MLP; see Fig. 4), which correlates the feature vectors implicitly. Although an MLP can in principle approximate any function, there is overwhelming experimental evidence that the approximated function tends to be smooth (e.g. [Wizadwongsa2021NeX]), which makes it unsuitable for rendering high frequency details. Therefore, we first train the implicit MLP to generate a reasonable initialization of the feature grid, then discard the implicit MLP and optimize the feature vectors directly. Experiments show this ‘implicit-explicit’ strategy prevents overfitting while preserving high-frequency content.
4.2 Feature integration
Intersecting a ray with the voxel grid yields a set of intervals, which are processed separately by our integrator. Write $t_1 < t_2 < \dots$ for the parameter values defining these intervals, ordered from eye to far end. For interval $i$, we obtain density $\sigma_i$ and radiance $\mathbf{c}_i$ by passing the integral of the feature field along the interval to the MLP. Let $\mathbf{f}_k$ be the feature vectors at the corners of the voxel the interval passes through and $w_k(t)$ be the corresponding trilinear interpolation weights at $\mathbf{r}(t)$, so:
$$\hat{\mathbf{f}}_i = \int_{t_i}^{t_{i+1}} \mathbf{f}(\mathbf{r}(t))\,dt = \sum_{k=1}^{8} \left( \int_{t_i}^{t_{i+1}} w_k(t)\,dt \right) \mathbf{f}_k \tag{5}$$
$$(\sigma_i, \mathbf{c}_i) = \mathrm{MLP}_{\theta}(\hat{\mathbf{f}}_i, \mathbf{d}) \tag{6}$$
Here $\theta$ denotes the learnable weights of the MLP, and we incorporate the viewing direction $\mathbf{d}$ to model view dependent effects. These per-interval approximations are accumulated into a single estimate of the integral by
$$\alpha_i = 1 - \exp(-\sigma_i) \tag{7}$$
$$T_i = \prod_{j=1}^{i-1} (1 - \alpha_j) \tag{8}$$
$$\hat{C}(\mathbf{r}) = \sum_{i} T_i\,\alpha_i\,\mathbf{c}_i \tag{9}$$
(which is an approximation of Eq. 1, cf. [autoint2021, Max95]). Notice that, if the MLP had no hidden layers and the integrand were a known function, we would be adjusting the components of a basis function expansion of the integrand to produce the approximation.
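To make Eqs. 5–9 concrete, here is a simplified sketch (ours). The paper evaluates the weight integral of Eq. 5 in closed form; for readability we approximate it here with midpoint quadrature, and a stub would stand in for the real decoder MLP of Eq. 6.

```python
# Sketch (ours, simplified): DIVeR-style per-interval integration (Eqs. 5-9).
# The closed-form weight integral is replaced by dense midpoint quadrature.
import numpy as np

def integrated_feature(corners, p0, p1, n=64):
    """Integrate the trilinear feature along the segment p0 -> p1 (Eq. 5).

    corners: (2, 2, 2, F); p0, p1: endpoints in the voxel's local [0,1]^3 frame.
    """
    ts = (np.arange(n) + 0.5) / n
    pts = p0[None] + ts[:, None] * (p1 - p0)[None]       # midpoint sample points
    wx = np.stack([1 - pts[:, 0], pts[:, 0]], 1)         # (n, 2) per-axis weights
    wy = np.stack([1 - pts[:, 1], pts[:, 1]], 1)
    wz = np.stack([1 - pts[:, 2], pts[:, 2]], 1)
    w = wx[:, :, None, None] * wy[:, None, :, None] * wz[:, None, None, :]
    length = np.linalg.norm(p1 - p0)
    return (w[..., None] * corners).sum(axis=(0, 1, 2, 3)) * length / n

def composite_intervals(sigmas, colors):
    """Eqs. 7-9: each sigma_i already integrates density over its interval."""
    alphas = 1.0 - np.exp(-sigmas)                                  # Eq. 7
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # Eq. 8
    return ((trans * alphas)[:, None] * colors).sum(axis=0)         # Eq. 9
```

Because $\sigma_i$ already integrates density over the interval, Eq. 7 needs no interval length, unlike Eq. 2.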
Figure 5: Monte Carlo estimation vs. our feature integration.
Our integrator has two advantages over MC. First, we get a slightly better estimate per interval (the MC estimate assumes the fields are constant inside an interval; ours fits them with an MLP; see Fig. 5), which manifests in better rendering quality (Sec. 5.4). Second, because the integrator is deterministic, the error in the integral estimates is deterministic, and so is the gradient, which may help learning; our experience has been that our method suffers vanishing gradients less often than standard NeRF and is less sensitive to the choice of learning rate.
Figure 6: The two decoder architectures, DIVeR64 and DIVeR32.
4.3 Architecture
We choose the feature dimension to be 32; the voxel grid size varies according to the target image resolution. The grid is relatively coarse and can be represented very efficiently in a sparse form (Sec. 5.1). As shown in Fig. 6, we investigate two different MLP decoders: DIVeR32 and DIVeR64. Similar to [mildenhall2020nerf], we apply positional encoding to the viewing direction, but we pass the integrated feature into the MLP directly, without positional encoding. We use 10 frequency bands for positional encoding in the implicit regularization MLP and 4 bands in the decoder MLP. Because the architectures are tiny, one MLP call takes less than 1 ms, which allows MLP evaluation to happen in real time.
4.4 Training
We optimize the feature grid, the decoder MLP weights, and the implicit MLP weights for each scene. During a training step, we randomly sample a batch of rays from the training set, follow the procedure described in Sec. 4.2 to render their colors, and then apply gradient descent on the features and decoder weights using Eq. 4. We want the voxel grid to be sparse, and so discourage the model from predicting background color in empty space using the regularization loss of [hedman2021baking]:
$$\mathcal{L}_s = \lambda_s \sum_{i} \log\!\left(1 + \frac{\sigma_i^2}{c}\right) \tag{10}$$

where $\sigma_i$ denotes the $i$-th accumulated density, $\lambda_s$ is the regularization weight, and $c$ is a fixed scale hyperparameter. In contrast to NeRF, we do not need hierarchical volume sampling because we use deterministic integration.
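The per-batch objective can be sketched as follows (ours; the Cauchy-style form of the sparsity term and the constant 0.5 inside the log are assumptions following [hedman2021baking]):

```python
# Sketch (ours): per-batch loss combining Eq. 4 with a sparsity regularizer in
# the style of Eq. 10; `lam` plays the role of the regularization weight.
import numpy as np

def diver_loss(pred_rgb, gt_rgb, accum_density, lam=1e-5):
    """pred_rgb, gt_rgb: (B, 3); accum_density: per-ray accumulated densities (B,)."""
    mse = ((pred_rgb - gt_rgb) ** 2).sum(axis=1).mean()        # Eq. 4
    sparsity = lam * np.log(1.0 + accum_density ** 2 / 0.5).mean()  # Eq. 10 style
    return mse + sparsity
```

The sparsity term is zero exactly when accumulated densities vanish, so empty space is pushed toward zero density rather than toward explaining the background color.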
Coarse to fine:
We speed up training with a coarse to fine procedure. Early in training, coarse resolution images suffice to determine which regions are empty, using the culling strategy discussed in Sec. 4.5. Based on the resulting coarse occupancy map, we then train on high resolution images while efficiently skipping the empty space. When we do so, we discard the features and MLP weights trained on the coarse images (which were trained to ignore fine details).
4.5 Inference time optimization
To avoid querying voxels that have no effect on the image (empty voxels; occluded voxels), we follow [yu2021plenoctrees]: we record the maximum blended alpha for each voxel over the training views, then cull all voxels whose maximum blended alpha falls below a threshold. This culls 98% of voxels on average while preserving transparent surfaces. We cull after the coarse training step (to accelerate fine-scale training) and then again after fine-scale training.
To avoid working on voxels occluded from a given camera view, we evaluate intervals starting from the eye and stop working on a ray once its transmittance estimate falls below a threshold. Furthermore, if an interval's alpha is below a second threshold, there is no need to evaluate its color.
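The two inference-time tests can be sketched as follows (ours; the threshold values and the decoder stubs are illustrative assumptions):

```python
# Sketch (ours): early ray termination. Interval evaluation stops once the
# transmittance falls below `t_min`; color decoding is skipped when an
# interval's alpha is below `a_min`.
import numpy as np

def render_ray(intervals, decode_sigma, decode_color, t_min=0.01, a_min=1e-4):
    color, trans = np.zeros(3), 1.0
    for feat in intervals:                      # front-to-back voxel order
        alpha = 1.0 - np.exp(-decode_sigma(feat))
        if alpha >= a_min:                      # skip color for empty intervals
            color += trans * alpha * decode_color(feat)
        trans *= 1.0 - alpha
        if trans < t_min:                       # ray saturated: stop early
            break
    return color
```

With an opaque first interval, later intervals are never decoded, which is where the inference-time savings come from.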
In contrast to other voxel based real-time applications, we do not need to convert the trained model (so there is no precision loss, etc. from discretizing the model). While in principle, our inference time optimizations must result in loss of accuracy, the results of Sec. 5.3 suggest that this loss is negligible.
5 Experiments
We evaluate using both the offline rendering task (FPS < 20) and the real-time rendering task (FPS ≥ 20). We use the NeRF-synthetic dataset [mildenhall2020nerf] (800×800 synthetic images with camera poses), and subsets of the Tanks and Temples dataset [10.1145/3072959.3073599] and the BlendedMVS dataset [yao2020blendedmvs] (chosen by the NSVF authors [liu2020neural]). Tanks and Temples images are 1920×1080; BlendedMVS images are 768×576. Backgrounds in both datasets are cropped by NSVF. The qualitative results of our experiments can be seen in Fig. 7 and Fig. 8. In all quantitative measurements, we mark the best result in bold font and the second best in italics.
5.1 Implementation details
Training:
We use PyTorch [NEURIPS2019_9015] for network optimization and custom CUDA kernels to accelerate ray-voxel intersection. For high resolution image training, both implicit and explicit models use a voxel grid of size 256³ for NeRF-synthetic and BlendedMVS, and a larger grid for Tanks and Temples. For coarse model training, we take the voxel grid and images at 1/4 of the fine model scale. We follow NSVF's [liu2020neural] strategy to sample rays from the training set, with a batch size of 1024 pixels for coarse training, 6144 for fine training on Tanks and Temples, and 8192 for fine training on the other datasets. The coarse model is first trained explicitly for 5 epochs; we then train the model with the implicit MLP until the validation loss has almost converged. Finally, we train the explicit grid initialized from the implicit model and stop when the total training time reaches 3 days. Peak GPU memory usage is around 40GB. We use the Adam [Kingma2015AdamAM] optimizer with a learning rate of 5e-4 for the fine model and 1e-3 for the coarse model, and set the sparsity regularization weight to 1e-5.
Table 1: Offline rendering quality (best in bold, second best in italics).

| Dataset | Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|---|
| NeRF-Synthetic | NeRF [mildenhall2020nerf] | 31.00 | 0.947 | 0.081 |
| NeRF-Synthetic | JaxNeRF [jaxnerf2020github] | 31.65 | 0.952 | 0.051 |
| NeRF-Synthetic | AutoInt [autoint2021] | 25.55 | 0.911 | 0.170 |
| NeRF-Synthetic | NSVF [liu2020neural] | *31.74* | *0.953* | *0.047* |
| NeRF-Synthetic | DIVeR64 | **32.32** | **0.960** | **0.032** |
| BlendedMVS | NeRF [mildenhall2020nerf] | 24.15 | 0.828 | 0.192 |
| BlendedMVS | JaxNeRF [jaxnerf2020github] | - | - | - |
| BlendedMVS | AutoInt [autoint2021] | - | - | - |
| BlendedMVS | NSVF [liu2020neural] | *26.90* | *0.898* | *0.113* |
| BlendedMVS | DIVeR64 | **27.25** | **0.910** | **0.073** |
| Tanks & Temples | NeRF [mildenhall2020nerf] | 25.78 | 0.864 | 0.198 |
| Tanks & Temples | JaxNeRF [jaxnerf2020github] | 27.94 | *0.904* | 0.168 |
| Tanks & Temples | AutoInt [autoint2021] | - | - | - |
| Tanks & Temples | NSVF [liu2020neural] | **28.40** | 0.900 | *0.153* |
| Tanks & Temples | DIVeR64 | *28.18* | **0.912** | **0.116** |
Real-time application:
Our real-time application is implemented in CUDA and Python, with all operations parallelized per image pixel. For each frame and each pixel, ray marching finds a fixed number of hits on the voxel grid; the MLP is evaluated for each hit, and the result is blended into the image buffer. This sequence repeats until the ray termination criterion is reached (Sec. 4.5).
Storage:
Because the voxel grid is sparse, we need to store only: the indices and values of feature vectors for non-empty voxels; a binary occupancy mask; and the MLP weights. At inference, we keep the feature vectors in a 1D array and build a dense 3D array that stores, for each voxel, the index of its feature vector, thereby reducing GPU memory demand without much sacrifice in performance.
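A sketch of this indexing scheme (ours; the array layouts are assumptions):

```python
# Sketch (ours): sparse voxel storage. Feature vectors for occupied voxels live
# in a flat (M, F) array; a dense int32 grid maps voxel coordinates to rows of
# that array (-1 marks empty voxels).
import numpy as np

def build_sparse(occupancy, features_dense):
    """occupancy: (N, N, N) bool; features_dense: (N, N, N, F)."""
    index = np.full(occupancy.shape, -1, dtype=np.int32)
    coords = np.argwhere(occupancy)
    index[tuple(coords.T)] = np.arange(len(coords), dtype=np.int32)
    values = features_dense[occupancy]          # (M, F) packed features
    return index, values

def lookup(index, values, i, j, k):
    row = index[i, j, k]
    return None if row < 0 else values[row]
```

The dense index grid costs one int per voxel, while the (much larger) feature vectors are stored only for the roughly 2% of voxels that survive culling.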
5.2 Offline rendering
We evaluate the offline model by measuring the similarity between rendered and ground truth images using PSNR, SSIM [1284395], and LPIPS [zhang2018perceptual]. We use our DIVeR64 model for all scenes.
Figure 7: Results on NeRF-synthetic. Panels: NeRF, PlenOctrees, DIVeR32 (RT), ground truth.
Baselines:
We compare with the original NeRF [mildenhall2020nerf]; its reimplementation in Jax [jaxnerf2020github]; AutoInt [autoint2021]; and NSVF [liu2020neural] (which uses similar voxel grid features). Pre-trained models from real-time NeRF variants produce good rendering quality but require very large computational resources to train and evaluate (for example, JaxNeRF+ [hedman2021baking] doubles the feature size of NeRF's MLP and takes 5 times more samples for volume rendering, which is impractical to evaluate on a standard GPU). Therefore, we exclude them from the baseline models.
Results:
DIVeR's rendering quality is comparable with the other offline baselines, while its architecture is much simpler (Tab. 1). Our PSNR is only slightly worse than NSVF's on Tanks and Temples, but we use a much simpler decoder MLP.
Figure 8: Results on Tanks and Temples and BlendedMVS. Panels: DIVeR64, ground truth.
Table 2: Quality of the pre-conversion models on NeRF-synthetic (best in bold, second best in italics).

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|
| NeRF-SH [yu2021plenoctrees] | 31.57 | 0.952 | 0.063 |
| JaxNeRF+ [hedman2021baking] | **33.00** | **0.962** | *0.038* |
| NeRF [mildenhall2020nerf] | 31.00 | 0.947 | 0.081 |
| DIVeR32 | *32.16* | *0.958* | **0.032** |
Table 3: Real-time rendering on NeRF-synthetic (best in bold, second best in italics).

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FPS ↑ | MB ↓ | GPU GB ↓ |
|---|---|---|---|---|---|---|
| PlenOctrees [yu2021plenoctrees] | *31.71* | **0.958** | 0.053 | *76.66* | 1930 | 1.65 / 1.09 |
| SNeRG [hedman2021baking] | 30.38 | *0.950* | 0.050 | **98.37** | *84* | 1.73 / 1.48 |
| FastNeRF [garbin2021fastnerf] | 29.97 | 0.941 | 0.053 | - | - | - |
| KiloNeRF [reiser2021kilonerf] | 31.00 | *0.950* | **0.030** | 28.12 | 161 | 1.68 / 0.27 |
| DIVeR32(RT) | **32.12** | **0.958** | *0.033* | 47.20 | **68** | 1.07 / 0.06 |
5.3 Real-time rendering
For the real-time rendering task, we use our DIVeR32 model with the inference time optimizations described in Sec. 4.5. Besides rendering quality, we measure inference-time efficiency by running all models on a GTX 1080 GPU and recording their FPS and GPU memory usage. To compare the compactness of the architectures, we report the average storage needed per scene. As most real-time models are converted from a pre-trained model, we also report the rendering quality of those pre-trained models and compare the precision loss after conversion. For methods with variants based on a quality-speed trade-off, we report the variant with the best rendering quality.
Baselines:
For our real-time rendering baselines, we compare with PlenOctrees [yu2021plenoctrees], SNeRG [hedman2021baking], FastNeRF [garbin2021fastnerf], and KiloNeRF [reiser2021kilonerf]. Since SNeRG does not provide pre-trained models, we report the measurements from their paper directly. For the same reason, we report only the rendering quality of FastNeRF, as given in their paper. For KiloNeRF, we measure performance on the scenes for which a model is available (chair, lego, and ship).
Results:
Our rendering quality is either best (Tab. 3) or second best (Tab. 2), and our method achieves very high frame rates with very small models. All other methods must (a) be converted to a real-time form and then (b) fine-tuned to recover the precision lost in conversion. Fine-tuning is crucial for these models: for example, without fine-tuning, SNeRG's PSNR degrades dramatically to 26.68. In contrast, our model is evaluated exactly as trained, without conversion or fine-tuning. Early ray termination (Sec. 4.5) causes the mild quality degradation observed in our real-time model.
5.4 Ablation study
Table 4: Architecture ablation on NeRF-synthetic.

| Grid size | Decoder | RT | PSNR | FPS | MB |
|---|---|---|---|---|---|
| 256 | DIVeR64 | No | 32.32 | 0.62 | 62 |
| 256 | DIVeR32 | No | 32.16 | 0.62 | 68 |
| 128 | DIVeR64 | No | 30.72 | 0.62 | 12 |
| 128 | DIVeR32 | No | 30.53 | 0.62 | 12 |
| 256 | DIVeR64 | Yes | 32.30 | 26.9 | 62 |
| 256 | DIVeR32 | Yes | 32.12 | 47.20 | 68 |
| 128 | DIVeR64 | Yes | 30.64 | 37.20 | 12 |
| 128 | DIVeR32 | Yes | 30.52 | 82.37 | 12 |
Figure 9: Qualitative comparison of decoder and voxel grid sizes (DIVeR64, DIVeR32) against ground truth.
Architecture:
We perform all our ablation studies on the NeRF-synthetic dataset. Tab. 4 shows the performance trade-offs between different network architectures. Without any real-time optimization, our model still runs much faster than regular NeRF, which takes minutes to render a single frame; using the smaller decoder costs a minor loss of quality but doubles speed (DIVeR32 uses half as many registers as DIVeR64, allowing more threads to run in the CUDA kernel). Further economy with acceptable PSNR can be obtained by reducing the voxel grid size (compare DIVeR32 at grid size 128, yielding 30.42 PSNR for a roughly 12 MB model, to the PlenOctrees variant with 30.7 PSNR at about 400 MB). Fig. 9 compares different MLP sizes and voxel grid sizes qualitatively.
Figure 10: Editing examples: scene composition and object swapping.
Table 5: Training strategy ablation on the lego scene (DIVeR64).

| Integrator | Regularization | Data type | PSNR | MB |
|---|---|---|---|---|
| Det | Im-Ex | float32 | 35.52 | 64 |
| Rand | Im-Ex | float32 | 33.89 | 67 |
| Det | Im-Ex | uint8 | 35.44 | 19 |
| Det | Im | float32 | 34.69 | 64 |
| Det | Ex | float32 | 34.02 | 64 |
Training strategy:
Tab. 5 shows the effect of different training strategies on the lego scene trained with our DIVeR64 model. The deterministic integrator is important: using a random integrator (implemented with the sampling strategy of NSVF [liu2020neural]) at train and test time causes a notable loss of quality. The implicit-explicit training strategy also matters: replacing it with either a purely implicit model or purely explicit optimization without implicit MLP initialization (compare Fig. 3) results in a smaller but still clear loss of quality. A lower precision representation of the feature vectors (trained with a tanh mapping; converted to uint8) costs a minor loss of quality, but the model size is reduced by a factor of 3.
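The uint8 variant can be sketched as follows (ours; the exact rounding and scaling scheme is an assumption — features bounded by a tanh during training lie in (-1, 1) and are mapped linearly to 8-bit codes):

```python
# Sketch (ours): 8-bit quantization of tanh-bounded feature vectors.
import numpy as np

def quantize(feat):
    """Map tanh-bounded features in (-1, 1) to uint8 codes."""
    return np.clip(np.round((feat + 1.0) * 127.5), 0, 255).astype(np.uint8)

def dequantize(codes):
    return codes.astype(np.float32) / 127.5 - 1.0

x = np.tanh(np.random.default_rng(1).normal(size=256)).astype(np.float32)
err = np.abs(dequantize(quantize(x)) - x).max()   # worst-case error <= 1/255
```

The round trip costs at most half a quantization step (1/255) per feature entry, which is consistent with the small PSNR drop in Tab. 5 against the 3x storage reduction.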
Editability:
The voxel based representation allows us to perform some basic scene manipulations. We can composite scenes by blending their voxel grids and then using the corresponding decoders for rendering. Because the feature vectors encode high-level information about local appearance, we can extract the segmentation of an object in a selected area by k-means clustering on the feature vectors, which allows us to swap objects without noticeable artifacts. Fig. 10 shows some examples.
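A minimal sketch (ours) of clustering voxel features for object selection; a production pipeline would likely use a library implementation of k-means instead:

```python
# Sketch (ours): minimal k-means over voxel feature vectors, as a stand-in for
# the clustering-based object selection described above.
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """feats: (M, F) voxel features; returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        d = ((feats[:, None] - centers[None]) ** 2).sum(-1)   # (M, k) sq. dists
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():                           # guard empty clusters
                centers[c] = feats[labels == c].mean(0)
    return labels, centers
```

Voxels whose features fall in the selected cluster can then be copied, deleted, or moved in the grid to realize the edits of Fig. 10.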
6 Limitations
Figure 11: Aliasing failure cases (ours vs. ground truth).
Figure 12: View dependent effect failures (NeRF, ours, ground truth).
Aliasing error in a deterministic integrator tends to be patterned, whereas a stochastic integrator breaks it up [cookstoch]. As a result, rays nearly tangent to voxels, or accumulated error in the intersection routine, can cause visible problems (Fig. 11). A mixed stochastic-deterministic method (say, jittering voxel positions) may help. Like NeRF, our method can fail to model view dependent effects correctly (Fig. 12); more physical modeling might help. Our method does not currently apply to unbounded scenes, and our editing abilities are currently quite limited.
Acknowledgements:
We thank Sara Aghajanzadeh, Derek Hoiem, Shenlong Wang, and Zhen Zhu for their detailed and valuable comments on our paper.