Title
Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content

Author
Viana, Paula (INESC TEC (Formerly INESC Porto); Polytechnic of Porto)
Andrade, Maria Teresa (INESC TEC (Formerly INESC Porto); Universidade do Porto)
Carvalho, Pedro (INESC TEC (Formerly INESC Porto); Polytechnic of Porto)
Vilaça, Luis (INESC TEC (Formerly INESC Porto))
Teixeira, Inês N. (INESC TEC (Formerly INESC Porto))
Costa, Tiago (INESC TEC (Formerly INESC Porto))
Jonker, P.P. (TU Delft Biomechatronics & Human-Machine Control; QdepQ Systems B.V.)

Date
2022

Abstract
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far essentially addresses textual content or the identification and selection of specific types of content. A wealth of possibilities is yet to be explored by bringing ML into the multimedia creative process, allowing the knowledge it infers to automatically influence how new multimedia content is created.
The work presented in this article contributes towards this goal in three distinct ways: firstly, it proposes a methodology for re-training popular neural network models to identify new thematic concepts in static visual content and to attach meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects, and corresponding tools, that can be called upon automatically to apply those effects to a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and the generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow that offers the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The result strongly contrasts with current standard approaches that create random movements, by implementing an intelligent content- and context-aware video.

Subject
Automated content creation; Context awareness; Deep learning; RoI; Semantic awareness; Storytelling

To reference this document use:
http://resolver.tudelft.nl/uuid:d440d3fc-a010-4550-8ffe-65c136162079

DOI
https://doi.org/10.3390/jimaging8030068

ISSN
2313-433X

Source
Journal of Imaging, 8 (3)

Part of collection
Institutional Repository

Document type
journal article

Rights
© 2022 Paula Viana, Maria Teresa Andrade, Pedro Carvalho, Luis Vilaça, Inês N. Teixeira, Tiago Costa, P.P. Jonker

Files
PDF jimaging_08_00068.pdf (6.94 MB)
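The workflow the abstract describes (RoI annotation, effect selection, story assembly) can be sketched in outline. This is a purely illustrative sketch, not the paper's implementation: every name here (`detect_rois`, `EFFECT_MAP`, `build_story`, the concept labels and effect names) is a hypothetical stand-in for the re-trained detection models and effect tools the article actually presents.

```python
# Illustrative sketch of a Photo2Video-style pipeline: annotate regions of
# interest in a photo, map each detected concept to a digital effect, and
# assemble an ordered "story" of clips. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class RoI:
    label: str   # semantic concept attached to the region
    box: tuple   # (x, y, w, h) in pixels

# Hypothetical mapping from detected concept to a digital effect.
EFFECT_MAP = {
    "face": "ken_burns_zoom",
    "sky": "slow_pan",
}

def detect_rois(photo):
    """Stand-in for the re-trained detector; returns annotated regions."""
    # In the real workflow this step would run a fine-tuned neural network.
    return [RoI("face", (40, 30, 120, 160)), RoI("sky", (0, 0, 640, 90))]

def build_story(photo, context=None):
    """Attach an effect to each annotated region, in clip order."""
    rois = detect_rois(photo)
    # Situational context (time, location, assisted user edits) could be
    # used here to reorder or filter the clips.
    return [(r.label, r.box, EFFECT_MAP.get(r.label, "static_hold"))
            for r in rois]

story = build_story(photo=None)
```

The point of the sketch is the data flow: the semantic labels produced by the detection stage, rather than random motion, drive which effect is applied to each region.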