Siri AI

Role //

Product Designer, Frontend Developer


Duration //

Jan 2025 - Present

In this project, I look at how generative AI can improve CUI (Conversational User Interface) interactions, specifically Siri. The experience was designed in Figma and developed using Next.js for the frontend, with generative capabilities powered by the OpenAI API.



GitHub

Problem Space

Current CUIs struggle to handle non-command user requests. When faced with such inputs, their responses often amount to little more than a standard web search, returning results a user could just as easily find on their own.

This presents an opportunity for generative AI to offer users actionable insights and tailored responses beyond simple search results.

Command prompts for Siri

Use Cases

01: Multi-App Experiences

When humans search for information, they typically do so sequentially—switching from one app to another. Generative AI, on the other hand, has the potential to query and coordinate across multiple apps in parallel.

For instance, when learning to bake a cake, Siri might place the recipe in the Notes app, set a series of timers in the Timer app, and queue up a cooking playlist in Spotify—all at once.

Mini-App Journey
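To make the idea concrete, here is a minimal sketch of how a frontend could fan a single request out to several apps in parallel with Promise.all. The helper functions (addNote, setTimers, queuePlaylist) are hypothetical placeholders standing in for real app integrations, not the project's actual API.

```ts
// Hypothetical app integrations -- these names are illustrative, not a real API.
async function addNote(content: string) {
  console.log("Notes:", content);
}
async function setTimers(minutes: number[]) {
  console.log("Timers:", minutes);
}
async function queuePlaylist(query: string) {
  console.log("Spotify:", query);
}

// Instead of switching between apps sequentially, fan the work out in parallel.
async function setUpBakingSession(recipe: string, timers: number[]) {
  await Promise.all([
    addNote(recipe),                   // place the recipe in the Notes app
    setTimers(timers),                 // set a series of timers
    queuePlaylist("cooking playlist"), // queue up music in Spotify
  ]);
}
```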

Design

I wanted users to interact dynamically with the generated content, enabling actions like regenerating, editing, and rearranging it to refine responses.

Tiles, Prompt Box, and Input Box Components

Tile Layout

Tile Resizing

Prototyping

To start the project, I explored the capabilities of generative AI, experimenting with various prompts to elicit the most detailed responses. I then developed the following prototypes to understand how to integrate a generative AI tool with the frontend using Next.js and the OpenAI API.

01: Event Generation

I started by experimenting with how well the OpenAI API could generate steps and action items.

Event Generation Prototypes
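A minimal sketch of the kind of call behind these prototypes, assuming the official openai Node SDK; the model name and prompt wording here are illustrative, not the exact ones used.

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask the model for discrete, actionable steps rather than free-form prose.
async function generateSteps(task: string): Promise<string[]> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content: "Break the user's task into short, numbered action items, one per line.",
      },
      { role: "user", content: task },
    ],
  });
  const text = completion.choices[0].message.content ?? "";
  // Split the numbered list back into individual steps for rendering.
  return text.split("\n").filter((line) => line.trim().length > 0);
}
```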

02: Regeneration

When prototyping regeneration, I thought about how users currently interact with generative AI tools. I played around with two cases where users would want to:

1. Completely change the generated content
2. Add/Remove from the generated content

With a bit of prompt engineering, here is the end result:

Regeneration prototype when asked to change directions and add content
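One way to express the two regeneration cases as prompts is sketched below; the wording is illustrative rather than the exact prompt engineering used in the prototype.

```ts
// Two regeneration modes: replace everything, or edit the existing content.
type RegenerationMode = "replace" | "edit";

function buildRegenerationPrompt(
  previousContent: string,
  userRequest: string,
  mode: RegenerationMode
): string {
  if (mode === "replace") {
    // Case 1: completely change the generated content.
    return `Ignore the previous answer and generate a new response to: ${userRequest}`;
  }
  // Case 2: add to / remove from the generated content, keeping the rest intact.
  return [
    "Here is the previously generated content:",
    previousContent,
    `Apply only this change, leaving everything else as-is: ${userRequest}`,
  ].join("\n\n");
}
```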

03: Grouping and Organizing Content

I then moved on to exploring how the OpenAI API could help organize and group the generated information into left, middle, and right columns.

Tiling prototype when asked to bake a cake
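A sketch of one approach to this grouping, assuming the model is instructed to return strict JSON that the frontend parses into columns; the shape and instruction text are illustrative.

```ts
// Expected shape of the model's grouped output.
interface TileLayout {
  left: string[];   // e.g. recipe steps
  middle: string[]; // e.g. timers
  right: string[];  // e.g. playlist suggestions
}

// Instruct the model to return strict JSON so the frontend can render columns.
const groupingInstruction = `
Organize your answer as JSON with three keys: "left", "middle", and "right".
Each key maps to an array of short tile descriptions. Return JSON only.
`;

function parseTileLayout(raw: string): TileLayout | null {
  try {
    return JSON.parse(raw) as TileLayout;
  } catch {
    return null; // fall back gracefully if the model breaks the format
  }
}
```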

Dashboard UX

After generating the tiled app views, I recognized that the interface was naturally evolving into a dynamic dashboard. This realization prompted me to explore how tile resizing and the addition of new columns could enhance the layout and overall user experience.

01: App Tiles

The variety of apps presented an opportunity to make each tile distinct and tailored to its functionality.

For example, I designed the Notes tile to be editable, allowing users to modify content directly within the tile. Regenerating results would then pull from the updated input, creating a more dynamic and personalized experience.

Notes Tile
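A simplified sketch of how such an editable tile could work in React: edits live in component state, and regeneration reads from the edited text rather than the original output. The prop names here are hypothetical.

```tsx
import { useState } from "react";

// A simplified editable Notes tile: edits update local state, and
// regeneration pulls from the user's edited input, not the original output.
function NotesTile({
  initialContent,
  onRegenerate,
}: {
  initialContent: string;
  onRegenerate: (currentContent: string) => void;
}) {
  const [content, setContent] = useState(initialContent);

  return (
    <div className="tile">
      <textarea value={content} onChange={(e) => setContent(e.target.value)} />
      {/* Regenerating sends the edited text back to the model. */}
      <button onClick={() => onRegenerate(content)}>Regenerate</button>
    </div>
  );
}
```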

02: Tile Resizing

To enhance interactivity, I introduced an active state for individual tiles. When a user engages with a specific tile, it highlights with a distinct border color and expands vertically. This helps users see their edits in context, emphasizing how their input contributes to the broader dashboard experience.

Tile Resizing
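A rough sketch of the active-state behavior: the engaged tile gets a distinct border color and a larger maximum height. The specific colors and sizes are placeholders, not the design's actual values.

```tsx
import type { ReactNode } from "react";

// Active tiles get a highlighted border and expand vertically,
// keeping the user's edits visible in the context of the dashboard.
function Tile({
  id,
  activeId,
  onFocus,
  children,
}: {
  id: string;
  activeId: string | null;
  onFocus: (id: string) => void;
  children: ReactNode;
}) {
  const isActive = id === activeId;
  return (
    <div
      onClick={() => onFocus(id)}
      style={{
        border: isActive ? "2px solid #0a84ff" : "1px solid #ddd",
        maxHeight: isActive ? 480 : 240, // vertical expansion when active
        overflow: "hidden",
        transition: "max-height 0.2s ease, border-color 0.2s ease",
      }}
    >
      {children}
    </div>
  );
}
```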

03: Adding Columns

To accommodate growth, users can add an additional column of apps to their dashboard—a scaled-down implementation of future functionality. This sets the foundation for a more customizable experience, where users will eventually be able to generate and specify the types of apps they want to include.

Adding Column to Dashboard
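One way to model this in state, sketched below: the dashboard holds an array of columns, and adding a column appends an empty one that new app tiles can be generated into.

```ts
import { useState } from "react";

// The dashboard stores tiles as an array of columns; adding a column
// appends an empty array for future app tiles.
function useDashboardColumns(initial: string[][]) {
  const [columns, setColumns] = useState(initial);
  const addColumn = () => setColumns((cols) => [...cols, []]);
  return { columns, addColumn };
}
```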

04: Loading Screen

To provide users with clear feedback during result generation, I designed a loading experience inspired by Apple’s AI design system.

Loading