Real-time Material Estimation (WiSe 2023/24)
Three main components contribute to the content of an image captured by a camera: the lighting conditions, the surface geometry of the observed objects, and their material properties. Inverse rendering, a central task of computer vision, aims to reconstruct these three components, which fully describe a 3D scene, relying only on 2D images as input.
The focus of this project is the reconstruction of material properties. This task is non-trivial in the absence of known (or perfectly reconstructed) scene geometry and illumination. Conversely, objects of unknown general reflectance pose a challenge for geometry reconstruction. To handle these codependent problems, researchers have proposed joint iterative optimization frameworks that simultaneously estimate all three scene components. These solutions produce high-quality results at the cost of long computation times.
The aim of the project is to build a system that can estimate the material properties of scene objects with (approximately) known geometry in real time. The desired system should handle moderately complex scenes containing multiple objects of diverse, non-homogeneous materials. The reflectance model should be chosen carefully so that it can be evaluated in real time while still allowing a plausible representation of complex materials. For example, the Disney BRDF can represent appearance effects such as anisotropic reflection, sheen, and subsurface scattering, whereas the visually much simpler Phong model is computationally very efficient.
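To illustrate the gap between the two ends of this spectrum, the sketch below evaluates a normalized Phong BRDF next to a stripped-down Disney-style BRDF (Burley diffuse plus a single isotropic GGX specular lobe). This is only an illustration under assumptions of our own: the parameter names (base_color, roughness, f0, ...) and the NumPy setting are placeholders, not the project's actual interface or shading language.

```python
# Sketch: per-sample evaluation cost/complexity of two reflectance models.
# Scalar (grayscale) version; all parameter names are illustrative assumptions.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_brdf(n, l, v, kd, ks, shininess):
    """Normalized Phong: cheap diffuse term + cosine-lobe specular term."""
    r = normalize(2.0 * np.dot(n, l) * n - l)          # mirror reflection of l
    diffuse = kd / np.pi
    specular = ks * (shininess + 2.0) / (2.0 * np.pi) * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular

def disney_like_brdf(n, l, v, base_color, roughness, f0):
    """Simplified principled BRDF: Burley diffuse + isotropic GGX specular,
    omitting sheen, clearcoat, anisotropy and subsurface terms."""
    h = normalize(l + v)
    nl = max(np.dot(n, l), 1e-4)
    nv = max(np.dot(n, v), 1e-4)
    nh = max(np.dot(n, h), 1e-4)
    lh = max(np.dot(l, h), 1e-4)
    # Burley diffuse with roughness-dependent retro-reflection
    fd90 = 0.5 + 2.0 * roughness * lh * lh
    fd = (1.0 + (fd90 - 1.0) * (1.0 - nl) ** 5) * (1.0 + (fd90 - 1.0) * (1.0 - nv) ** 5)
    diffuse = base_color / np.pi * fd
    # GGX normal distribution, Smith shadowing-masking, Schlick Fresnel
    alpha = roughness * roughness
    d = alpha ** 2 / (np.pi * (nh * nh * (alpha ** 2 - 1.0) + 1.0) ** 2)
    g1 = lambda x: 2.0 * x / (x + np.sqrt(alpha ** 2 + (1.0 - alpha ** 2) * x * x))
    g = g1(nl) * g1(nv)
    fr = f0 + (1.0 - f0) * (1.0 - lh) ** 5
    specular = d * g * fr / (4.0 * nl * nv)
    return diffuse + specular

if __name__ == "__main__":
    n = np.array([0.0, 0.0, 1.0])
    l = normalize(np.array([0.3, 0.2, 1.0]))
    v = normalize(np.array([-0.4, 0.1, 1.0]))
    print("Phong      :", phong_brdf(n, l, v, kd=0.6, ks=0.4, shininess=64.0))
    print("Disney-like:", disney_like_brdf(n, l, v, base_color=0.6, roughness=0.3, f0=0.04))
```

Even this reduced principled model requires several transcendental and divisive operations per lobe, which is the kind of cost that must be weighed against the Phong model's simplicity when the BRDF is evaluated (and differentiated) per pixel in real time.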
Ultimately, we are looking for a good trade-off between the required computational effort and the highest possible visual fidelity of the reconstructed multi-object, multi-material 3D scenes.