DesignMinds

Enhancing Video-Based Design Ideation with Vision-Language Model and Context-Injected Large Language Model

Preprint (2024)
Author(s)

T. He (TU Delft - Knowledge and Intelligence Design)

A. Stanković (TU Delft - Industrial Design Engineering)

E. Niforatos (TU Delft - Knowledge and Intelligence Design)

G. Kortuem (TU Delft - Knowledge and Intelligence Design)

Publication Year
2024
Language
English
Publisher
arXiv

Abstract

Ideation is a critical component of video-based design (VBD), where videos serve as the primary medium for design exploration and inspiration. The emergence of generative AI offers considerable potential to enhance this process by streamlining video analysis and facilitating idea generation. In this paper, we present DesignMinds, a prototype that integrates a state-of-the-art Vision-Language Model (VLM) with a context-enhanced Large Language Model (LLM) to support ideation in VBD. To evaluate DesignMinds, we conducted a between-subjects study with 35 design practitioners, comparing it against a baseline condition. Our results demonstrate that DesignMinds significantly enhances the flexibility and originality of ideation, while also increasing task engagement. Importantly, the introduction of this technology did not negatively impact user experience, technology acceptance, or usability.


Metadata only record. There are no files for this record.