Three collocated telepresence participants (including the camera) meet two remote participants in a shared virtual scene. Since the local participants share similar viewing perspectives, our output-sensitive approach optimizes remote avatar reconstruction and network bandwidth utilization, and reduces round-trip times for both parties in group-to-group telepresence scenarios.
Abstract
In this paper, we propose a system design and implementation for output-sensitive reconstruction, transmission, and rendering of 3D video avatars in distributed virtual environments. In our immersive telepresence system, users are captured by multiple RGBD sensors connected to a server that performs geometry reconstruction based on viewing feedback from remote telepresence parties. This feedback and reconstruction loop enables visibility-aware level-of-detail reconstruction of video avatars with respect to geometry and texture data, and considers both individual users and groups of collocated users. Our evaluation reveals that our approach significantly reduces reconstruction times, network bandwidth requirements, round-trip times, and rendering costs in many situations.
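The core idea of the feedback loop can be illustrated with a small sketch. The code below is a hypothetical simplification, not the paper's implementation: remote parties report the on-screen coverage of each avatar, collocated viewers with similar perspectives are served by a single reconstruction, and the server picks a level of detail (0 = finest) from the reported coverage. All names and thresholds are illustrative assumptions.

```python
def select_lod(coverage_px: float, num_levels: int = 4) -> int:
    """Map an avatar's reported on-screen footprint (in pixels) to a
    level of detail, 0 = finest. Thresholds are hypothetical."""
    thresholds = [200_000, 50_000, 10_000]  # px; one fewer than num_levels
    for level, t in enumerate(thresholds):
        if coverage_px >= t:
            return level
    return num_levels - 1  # coarsest level for small or distant avatars

def group_coverage(member_coverages: list[float]) -> float:
    """Collocated viewers share similar perspectives, so the server can
    reconstruct once per group at the finest level any member requires."""
    return max(member_coverages)
```

For example, a group whose members report coverages of 12,000 and 60,000 pixels would be served a single level-1 reconstruction rather than two separate ones.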
Publication
Adrian Kreskowski, Stephan Beck, and Bernd Froehlich. 2022.
Output-Sensitive Avatar Representations for Immersive Telepresence.
In IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 7, pp. 2697-2709, 1 July 2022. Presented at IEEE VR 2021, Virtual Event. IEEE Computer Society. DOI: 10.1109/TVCG.2020.3037360
[IEEE][preprint][video][IEEE VR 2021 Presentation Video]