DesignMinds

Enhancing Video-Based Design Ideation with Vision-Language Model and Context-Injected Large Language Model

Preprint (2024)
Author(s)

T. He (TU Delft - Internet of Things)

Andrija Stanković (TU Delft - Industrial Design Engineering)

E. Niforatos (TU Delft - Internet of Things)

G.W. Kortuem (TU Delft - Internet of Things)

Internet of Things
Publication Year: 2024
Language: English

Abstract

Ideation is a critical component of video-based design (VBD), where videos serve as the primary medium for design exploration and inspiration. The emergence of generative AI offers considerable potential to enhance this process by streamlining video analysis and facilitating idea generation. In this paper, we present DesignMinds, a prototype that integrates a state-of-the-art Vision-Language Model (VLM) with a context-enhanced Large Language Model (LLM) to support ideation in VBD. To evaluate DesignMinds, we conducted a between-subjects study with 35 design practitioners, comparing its performance to a baseline condition. Our results demonstrate that DesignMinds significantly enhances the flexibility and originality of ideation, while also increasing task engagement. Importantly, the introduction of this technology did not negatively impact user experience, technology acceptance, or usability.

Metadata only record. There are no files for this record.