Computational Photography
A talk by Ramesh Raskar, Mitsubishi Electric Research Laboratories, Boston, USA
Wednesday, November 22, 2006, Bauhausstr. 11, R 013
(The talk will be streamed live from Boston)
*Abstract*
Computational photography combines plentiful computing, digital sensors, modern optics, actuators, and smart lights to escape the limitations of traditional film cameras and to enable novel imaging applications. Unbounded dynamic range; variable focus, resolution, and depth of field; hints about shape, reflectance, and lighting; and new interactive forms of photos that are partly snapshots and partly videos are just some of the new applications found in Computational Photography.
In traditional film-like digital photography, camera images represent a view of the scene via a 2D array of pixels. Computational Photography attempts to understand and analyze a ray-based representation of the scene. The camera optics encode the scene by bending the rays, the sensor samples the rays over time, and the final 'picture' is decoded from these encoded samples. The lighting (scene illumination) follows a similar path from the source to the scene via optional spatio-temporal modulators and optics. In addition, the processing may adaptively control the parameters of the optics, sensor and illumination.
The encoding and decoding process differentiates Computational Photography from traditional 'film-like digital photography'. With film-like photography, the captured image is a 2D projection of the scene. Due to the limited capabilities of the camera, the recorded image is a partial representation of the view. Nevertheless, the captured image is ready for human consumption: what you see is almost what you get in the photo. In Computational Photography, the goal is to achieve a potentially richer representation of the scene during the encoding process. In some cases, Computational Photography reduces to 'Epsilon Photography', where the scene is recorded via multiple images, each captured by an epsilon variation of the camera parameters. For example, successive images (or neighboring pixels) may have a different exposure, focus, aperture, view, illumination, or instant of capture. Each setting allows recording of partial information about the scene, and the final image is reconstructed from these multiple observations. In other cases, Computational Photography techniques lead to 'Coded Photography', where the recorded image appears distorted or random to a human observer, but the corresponding decoding recovers valuable information about the scene. I will describe several projects in Epsilon and Coded Photography.
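The idea behind Epsilon Photography can be illustrated with a minimal sketch: capture the same scene several times while varying one camera parameter by a small epsilon (here, exposure time), then merge the partial observations into a richer result (a high-dynamic-range radiance estimate). The synthetic scene, exposure times, and hat-shaped weighting below are illustrative assumptions, not the specific methods presented in the talk.

```python
import numpy as np

def capture(radiance, exposure, full_scale=1.0):
    """Simulate one photo: scale scene radiance by exposure time,
    then clip to model sensor saturation."""
    return np.clip(radiance * exposure, 0.0, full_scale)

def merge_exposures(shots, exposures, full_scale=1.0):
    """Reconstruct radiance from an exposure-bracketed stack.

    Each shot contributes an estimate (pixel / exposure time) only
    where it is well exposed: a simple 'hat' weight trusts mid-tone
    pixels and distrusts near-black or near-saturated ones.
    """
    num = np.zeros_like(shots[0])
    den = np.zeros_like(shots[0])
    for img, t in zip(shots, exposures):
        w = 1.0 - np.abs(2.0 * img / full_scale - 1.0)
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)

# A scene whose dynamic range exceeds any single exposure.
radiance = np.array([0.05, 0.4, 2.0, 8.0])
exposures = [0.1, 1.0, 4.0]  # epsilon-varied camera parameter
shots = [capture(radiance, t) for t in exposures]
recovered = merge_exposures(shots, exposures)
```

In this noiseless simulation, `recovered` matches the original radiance: each pixel is recovered from whichever exposures left it unsaturated, which is exactly the sense in which every setting records only partial information about the scene.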
*Biography*
Ramesh Raskar is a Senior Research Scientist at MERL. His research interests include projector-based graphics, computational photography, and non-photorealistic rendering. His published work in imaging and photography covers multi-flash photography for depth edge detection, image fusion, gradient-domain imaging, and projector-camera systems.
Dr. Raskar received the TR100 Award in 2004, which recognizes the top 100 innovators under 35 worldwide; the Global Indus Technovator Award at MIT in 2003, which recognizes the top 20 Indian technology innovators worldwide; and Mitsubishi Electric Invention Awards in 2003, 2004, and 2006.