Self-supervised Audio-reactive Music Video Synthesis

Title: Self-supervised Audio-reactive Music Video Synthesis: Measuring and optimizing audiovisual correlation
Author: Brouwer, Hans (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Chen, Lydia Y. (mentor); Liem, C.C.S. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science
Date: 2022-06-29

Abstract
Synthesizing audio-reactive videos to accompany music is a challenging multi-domain task that requires both a visual synthesis skill-set and an understanding of musical information extraction. In recent years, a new, flexible class of visual synthesis methods has gained popularity: generative adversarial networks (GANs). These deep neural networks can be trained to reproduce arbitrary images based on a dataset of about 10,000 examples. After training, they can be harnessed to synthesize audio-reactive videos by constructing sequences of inputs based on musical information.

Current approaches suffer from a few problems that hamper the quality and usability of GAN-based audio-reactive video synthesis. Some approaches consider only a small number of possible musical inputs and ways of mapping these to the GAN's parameters. This leads to weak audio-reactivity with a similar motion characteristic across all musical inputs. Other approaches do harness the full design space, but are difficult to configure correctly for effective results.

This thesis aims to address the tradeoff between audio-reactive flexibility and ease of attaining effective results. We introduce multiple algorithms that explore the design space by using machine learning to generate sequences of inputs for the GAN. To develop these machine learning algorithms, we first introduce a metric, the audiovisual correlation, that measures the audio-reactivity in a video.
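The thesis itself defines the audiovisual-correlation metric; as a minimal illustrative sketch only (not the thesis's actual formulation), such a metric can be thought of as the correlation between an audio feature signal and a per-frame visual-change signal. The function and variable names below are hypothetical:

```python
import numpy as np

def audiovisual_correlation(audio_env: np.ndarray, frames: np.ndarray) -> float:
    """Toy audiovisual-correlation score: Pearson correlation between an
    audio envelope (sampled at the video frame rate) and per-frame visual
    change, measured as the mean absolute difference between frames.
    audio_env: shape (T,); frames: shape (T, H, W) grayscale video."""
    # Visual change signal: how much each frame differs from the previous one.
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    env = audio_env[1:]  # align with the frame-difference signal
    # Pearson correlation between the two 1-D signals.
    env_c = env - env.mean()
    mot_c = motion - motion.mean()
    denom = np.sqrt((env_c ** 2).sum() * (mot_c ** 2).sum())
    return float((env_c * mot_c).sum() / denom) if denom > 0 else 0.0

# Synthetic example: a video whose frame-to-frame change exactly tracks a
# beat-like envelope, so the score should be 1.0.
T, H, W = 64, 8, 8
t = np.arange(T)
beat = 0.5 + 0.4 * np.sin(2 * np.pi * t / 16)       # pseudo audio envelope
frames = np.cumsum(beat)[:, None, None] * np.ones((T, H, W))
score = audiovisual_correlation(beat, frames)        # -> 1.0
```

A real metric would operate on learned or extracted audio and video features rather than raw brightness differences, but the structure (compare a time series from each modality) is the same.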
We use this metric to train models based only on a dataset of audio examples, avoiding the need for a large dataset of example audio-reactive videos. This self-supervised approach can even be extended to optimize a single audio-reactive video directly, removing the need to train a model beforehand.

Our evaluation shows that our algorithms outperform prior work in terms of their audio-reactivity. Our solutions explore a wider range of the audio-reactive space and do so without the need for manual feature extraction or configuration.

Subjects: deep learning; Generative Adversarial Networks; audio-reactive; video synthesis; self-supervised learning; multimodal machine learning; audiovisual correlation
To reference this document use: http://resolver.tudelft.nl/uuid:4f9c0a36-2884-43e8-a135-9e4b90c77fd2
Bibliographical note: https://jcbrouwer.github.io/thesis/supplement/
Supplementary material: https://github.com/JCBrouwer/self-supervised-audio-reactive
Part of collection: Student theses
Document type: master thesis
Rights: © 2022 Hans Brouwer
Files: Hans_Brouwer_Self_supervi ... thesis.pdf (PDF, 9.65 MB)