Contact person: Teresa Hirzle (tehi@di.ku.dk)

Internal Partners:

  1. Københavns Universitet (UCPH), Teresa Hirzle
  2. Ludwig-Maximilians-Universität München (LMU), Florian Müller/Julian Rasch  

External Partners:

  1. Saarland University, Martin Schmitz  

 

The use of generative AI in the creation of 3D objects has the potential to greatly reduce the time and effort required for designers and developers, resulting in a more efficient and effective creation of virtual 3D objects. Yet, research still lacks an understanding of suitable interaction modalities and common grounding in this field.

Objective:

The objective of this research project is to explore and compare interaction modalities suited to the collaborative creation of virtual 3D objects together with a generative AI. To this end, the project investigates how different input modalities, such as voice, touch, and gesture recognition, can be used to generate and alter a virtual 3D object, and how methods can be created for establishing common ground between the AI and the users.

Methodology:

The project is split into two work packages. (1) We investigate and evaluate the use of multi-modal input to alter the shape and appearance of 3D objects in virtual reality (VR). (2) Based on our insights into promising multi-modal interaction concepts, we then develop a prototypical multi-modal VR interface that allows users to collaborate with a generative AI on the creation of 3D objects. This might include, but is not limited to, the AI assistant generating 3D models (e.g., using https://threedle.github.io/text2mesh or Shap-E) or providing suggestions based on the users’ queries. The project will use a combination of experimental and observational methods to evaluate the effectiveness and efficiency of the concepts. This will involve conducting controlled experiments to test the effects of different modalities and AI assistance on the collaborative creation process, as well as observing and analyzing the users’ behavior.
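One way to picture work package (1) is a thin abstraction layer in which each modality (voice, touch, gesture) is parsed into a shared edit-command type, so the downstream generation or editing logic stays modality-agnostic. The sketch below is purely illustrative and not the project's actual implementation; the `EditCommand` type, the naive keyword parser, and the handler names are all hypothetical placeholders for real speech/gesture understanding and mesh-editing back ends.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical unified command that any input modality is parsed into.
@dataclass
class EditCommand:
    target: str      # object to modify, e.g. "chair"
    operation: str   # e.g. "scale", "recolor"
    argument: str    # e.g. "1.5", "red"

def parse_voice(utterance: str) -> EditCommand:
    # Naive keyword parsing as a stand-in for real speech understanding:
    # assumes the form "<operation> <argument> <target>".
    words = utterance.lower().split()
    return EditCommand(target=words[-1], operation=words[0], argument=words[1])

def parse_gesture(gesture: Dict[str, str]) -> EditCommand:
    # A recognized gesture event, already labeled by an (assumed) recognizer.
    return EditCommand(target=gesture["object"],
                       operation=gesture["kind"],
                       argument=gesture["magnitude"])

def dispatch(cmd: EditCommand,
             handlers: Dict[str, Callable[[EditCommand], str]]) -> str:
    # Route the unified command to the matching editing handler.
    return handlers[cmd.operation](cmd)

# Placeholder handlers; in a real system these would call the mesh editor
# or forward a query to the generative model.
handlers = {
    "scale": lambda c: f"scaled {c.target} by {c.argument}",
    "recolor": lambda c: f"recolored {c.target} to {c.argument}",
}

print(dispatch(parse_voice("scale 1.5 chair"), handlers))
print(dispatch(parse_gesture({"object": "chair", "kind": "recolor",
                              "magnitude": "red"}), handlers))
```

The design choice this illustrates: converging modalities into one command schema lets the AI-assistance layer and the evaluation logic compare modalities on equal footing.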

Results Summary:

The project has been completed, and a paper is currently under review at a top-tier conference. Due to the venue's strict anonymization policy, the authors have requested that detailed results be withheld for now. In summary, the project contributed insights into the effectiveness and efficiency of different modalities and AI assistance in enhancing the collaborative process, as well as guidelines for the design of multi-modal interfaces and AI assistance for the collaborative creation of 3D objects.