Creating a framework for complex multimodal interactions

A programming framework that uses an object-oriented state abstraction approach, letting developers seamlessly build multimodal interfaces.


Role

Research Assistant

Duration

Oct 2022 - June 2023

Team

Jackie Yang
Daniel Wan Rosli
Shuning Zhang
Yuhan Zhang
Monica S. Lam
James A. Landay

Tools

TypeScript
Qualtrics
Figma

Research Overview

Developing multimodal interfaces is often challenging for software developers, particularly when voice interactions must be handled manually. This complexity increases both development time and cost while limiting how expressively users can interact with an app.

ReactGenie offers a flexible framework designed to simplify the creation of complex multimodal applications. By translating user commands into Natural Language Programming Language (NLPL), ReactGenie enables developers to build sophisticated apps more efficiently. Our study revealed that 12 developers were able to build a functional ReactGenie app in just under 2.5 hours. Additionally, users completed tasks more quickly and with lower cognitive load compared to traditional GUI-based apps.
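To illustrate the object-oriented state abstraction idea at the heart of this approach, here is a minimal TypeScript sketch. The class names, methods, and interpreter here are hypothetical stand-ins, not ReactGenie's actual API: app state lives in ordinary classes, and a parsed natural-language command resolves to method calls on those classes.

```typescript
// Hypothetical sketch of object-oriented state abstraction.
// Not ReactGenie's real API; names are illustrative only.

// A parsed command: what the framework's NL parser might emit
// (shown here as a pre-parsed structure rather than real NLPL output).
type Invocation = { className: string; method: string; args: unknown[] };

class FoodItem {
  constructor(public name: string, public price: number) {}
}

class Cart {
  private static shared = new Cart();
  // Shared state is exposed through the class itself.
  static current(): Cart { return Cart.shared; }

  items: FoodItem[] = [];

  addItem(name: string, price: number): string {
    this.items.push(new FoodItem(name, price));
    return `Added ${name} to your cart.`;
  }

  total(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

// A stand-in for the framework's interpreter: it looks up the target
// class and invokes the requested method on its current instance.
const registry: Record<string, { current(): any }> = { Cart };

function execute(parsed: Invocation): unknown {
  const instance = registry[parsed.className].current();
  return instance[parsed.method](...parsed.args);
}

// The voice command "Add a latte to my cart" might parse to:
const reply = execute({
  className: "Cart",
  method: "addItem",
  args: ["latte", 4.5],
});
```

Because state and behavior are declared once as plain classes, the same code path serves both GUI taps and voice commands; only the parsing layer differs.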

What I did

Working with Stanford PhD student Jackie Yang, I co-developed a software framework tailored for React developers. My role focused on assisting with the development of ReactGenie and building a sample food-ordering app using the framework. We also conducted an elicitation study to identify implementation gaps and refine the app's design for a better user experience.

Learn More

Access the Research Paper on arXiv.
Explore the ReactGenie Code Repository on GitHub.
