2024
SHAPE-IT
Exploring Text-to-Shape-Display for Generative Shape-Changing Behaviors with LLMs
Wanli (Michael) Qian*, Chenfeng (Jesse) Gao*, Anup Sathya, Ryo Suzuki, Ken Nakagaki
About
This paper introduces text-to-shape-display, a novel approach to generating dynamic shape changes in pin-based shape displays through natural language commands. By leveraging large language models (LLMs) and AI-chaining, our approach allows users to author shape-changing behaviors on demand through text prompts without programming. We describe the foundational aspects necessary for such a system, including the identification of key generative elements (primitive, animation, and interaction) and design requirements to enhance user interaction, based on formative exploration and iterative design processes. Based on these insights, we develop SHAPE-IT, an LLM-based authoring tool for a 24 x 24 shape display, which translates the user's textual command into executable code and allows for quick exploration through a web-based control interface. We evaluate the effectiveness of SHAPE-IT in two ways: 1) performance evaluation and 2) user evaluation (N = 10). The study conclusions highlight the ability to facilitate rapid ideation of a wide range of shape-changing behaviors with AI. However, the findings also expose accuracy-related challenges and limitations, prompting further exploration into refining the framework for leveraging AI to better suit the unique requirements of shape-changing systems.
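To give a concrete sense of what "executable code" for a pin-based shape display might look like, here is a minimal sketch of a generated animation behavior. All names here (`wave_frame`, the grid size constant, the height convention) are hypothetical illustrations, not SHAPE-IT's actual API; it assumes pin heights normalized to [0, 1] on a 24 x 24 grid.

```python
import math

GRID = 24  # hypothetical: SHAPE-IT targets a 24 x 24 pin-based shape display


def wave_frame(t, amplitude=1.0, wavelength=8.0):
    """One animation frame: a 24 x 24 grid of pin heights in [0, 1]
    forming a traveling sine wave.

    This sketches the kind of primitive + animation code an LLM might
    emit from a prompt like "a wave rolling across the display".
    """
    frame = []
    for y in range(GRID):
        row = []
        for x in range(GRID):
            # Height oscillates along x and shifts with time step t.
            h = 0.5 + 0.5 * amplitude * math.sin(2 * math.pi * (x - t) / wavelength)
            row.append(max(0.0, min(1.0, h)))
        frame.append(row)
    return frame


# A short animation: 16 frames of the wave sweeping across the pins.
frames = [wave_frame(t) for t in range(16)]
```

In a system like the one described, code of this shape would be produced by the LLM from the user's text prompt and then streamed frame-by-frame to the display's pin actuators.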
The early work of this project was done in collaboration with Richard Liu and Prof. Rana Hanocka from 3DL (UChicago CS), and can be found at the page below: https://www.axlab.info/projects/towards-multimodal-interaction-with-ai-infused-shape-changing-uis