Capturing and grouping SDR frames containing sections of HDR from a video feed to artificially expand the dynamic range of SDR screens

Bachelor Thesis (2021)
Author(s)

R.R. Schreuder (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

R.T. Wiersma – Mentor (TU Delft - Computer Graphics and Visualisation)

Elmar Eisemann – Graduation committee member (TU Delft - Computer Graphics and Visualisation)

Apostolis Zarras – Coach (TU Delft - Cyber Security)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Rinke Schreuder
Publication Year
2021
Language
English
Graduation Date
01-07-2021
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this paper, a method is proposed to artificially expand the dynamic range of screens with a limited dynamic range. This research is linked to a new film-making technology in which, instead of using a green screen, a computer-generated background is displayed on a screen behind the scene in real time. This provides real-time lighting in the studio; however, due to the limited dynamic range of the screen, it cannot fully replicate the brightness of light sources. Overcoming this problem involves capturing and synchronizing frames that each display a small section of the wider dynamic range, referred to as illumination maps. The method uses a pipeline in which the illumination maps are displayed on a monitor in a grouped order and captured with a camera. The recording is processed by labeling the frames and selecting key frames. The key frames are then additively combined with compatible illumination maps, resulting in a video covering the full dynamic range.
A program was developed as a proof of concept, which produced the expected results. For various recording inputs, it was also found that the implemented program discarded a large portion of the recorded frames. A variation of the proposed method also yielded a slight speed-up with practically identical results.
The proposed method provides a good starting point for tackling the problem of artificially extending the dynamic range. The program used is a step in the right direction, but has flaws that limit its usefulness.
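To illustrate the additive combination step described in the abstract, the sketch below shows how several SDR key frames, each covering one section (illumination map) of the wider dynamic range, could be summed into a single higher-dynamic-range frame. This is a minimal illustration, not the thesis implementation; the function name, the per-frame weights, and the normalization are assumptions made for the example.

```python
# Minimal sketch (not the thesis implementation) of additively combining
# SDR key frames into one expanded-range frame. Names and weights are
# hypothetical and chosen only for illustration.
import numpy as np

def combine_key_frames(key_frames, exposure_weights=None):
    """Additively combine SDR key frames into one expanded-range frame.

    key_frames: list of HxWx3 uint8 arrays, one per illumination map.
    exposure_weights: optional per-frame scale factors; defaults to 1.0.
    """
    if exposure_weights is None:
        exposure_weights = [1.0] * len(key_frames)

    combined = np.zeros(key_frames[0].shape, dtype=np.float32)
    for frame, weight in zip(key_frames, exposure_weights):
        # Accumulate in floating point so the sum is not clipped at 255.
        combined += weight * frame.astype(np.float32) / 255.0
    # Values may exceed 1.0, representing the artificially expanded range.
    return combined
```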
