Generative autoregressive networks for 3D dancing move synthesis from music

Journal Article (2020)
Author(s)

Hyemin Ahn (Seoul National University)

Jaehun Kim (TU Delft - Multimedia Computing)

Kihyun Kim (Seoul National University)

Songhwai Oh (Seoul National University)

DOI: https://doi.org/10.1109/LRA.2020.2977333
Publication Year: 2020
Language: English
Issue number: 2
Volume number: 5
Pages (from-to): 3501-3508

Abstract

This letter proposes a framework that generates a sequence of three-dimensional human dance poses for a given piece of music. The proposed framework consists of three components: a music feature encoder, a pose generator, and a music genre classifier. We focus on integrating these components to generate realistic 3D human dancing moves from music, which can be applied to artificial agents and humanoid robots. The trained dance pose generator, a generative autoregressive model, is able to synthesize dance sequences longer than 1,000 pose frames. Experimental results on dance sequences generated from various songs show how the proposed method produces human-like dancing moves for a given piece of music. In addition, a generated 3D dance sequence is applied to a humanoid robot, showing that the proposed framework can make a robot dance just by listening to music.
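The record contains no implementation details beyond the abstract, but the described pipeline, a music feature encoder conditioning a generative autoregressive pose generator, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' model: the module names, feature dimensions, and the choice of a GRU for the autoregressive core are all assumptions, and the genre classifier is omitted.

```python
# Minimal sketch of the pipeline described in the abstract: a music feature
# encoder conditions an autoregressive pose generator that emits one 3D pose
# per frame. All architecture choices here (GRU core, dimensions, names) are
# illustrative assumptions, not details from the paper.
import torch
import torch.nn as nn


class MusicEncoder(nn.Module):
    """Maps a per-frame audio feature sequence to a conditioning sequence."""

    def __init__(self, audio_dim=40, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)

    def forward(self, audio_feats):          # (B, T, audio_dim)
        ctx, _ = self.rnn(audio_feats)       # (B, T, hidden_dim)
        return ctx


class AutoregressivePoseGenerator(nn.Module):
    """Predicts the next 3D pose from the previous pose and the music context."""

    def __init__(self, pose_dim=63, ctx_dim=128, hidden_dim=256):
        super().__init__()
        # pose_dim=63 assumes 21 joints x 3 coordinates (an illustrative choice)
        self.cell = nn.GRUCell(pose_dim + ctx_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, ctx, init_pose):
        B, T, _ = ctx.shape
        h = ctx.new_zeros(B, self.cell.hidden_size)
        pose, poses = init_pose, []
        for t in range(T):                   # autoregressive roll-out
            h = self.cell(torch.cat([pose, ctx[:, t]], dim=-1), h)
            pose = self.out(h)               # feed the prediction back in
            poses.append(pose)
        return torch.stack(poses, dim=1)     # (B, T, pose_dim)


# Usage: generate 1,200 pose frames (more than the 1,000 frames mentioned in
# the abstract) from random stand-in audio features; a real system would use
# actual per-frame audio features such as spectrogram frames.
encoder, generator = MusicEncoder(), AutoregressivePoseGenerator()
audio = torch.randn(1, 1200, 40)             # (batch, frames, audio_dim)
ctx = encoder(audio)
dance = generator(ctx, init_pose=torch.zeros(1, 63))
print(dance.shape)                           # torch.Size([1, 1200, 63])
```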

Metadata-only record: no files are available for this record.