Zenda Platform
Role //
Product Designer, Frontend Developer
Duration //
May 2024 - May 2025
In the summer of 2024, I was hired by Zenda Consulting as a Design Engineer. I worked on the product team, designing and developing a new software product.
During my time at Zenda, I worked on:
- Designing and integrating generative AI features into the product
- Conducting in-house product research for prompt engineering and feature testing
- Scaling the product to accommodate increased software complexity
User Testing Prototypes
One of my key responsibilities was developing a series of prototypes to explore how to deliver the most effective context to a large language model (LLM) for actionable, relevant outputs. This involved analyzing how users interact with generative AI and experimenting with different ways of structuring context and prompts to optimize the tool's performance.
To evaluate the quality of the generated results, I ran user testing sessions with consultants on the services side of the company, the tool's primary users. I monitored outcomes in Sentry and tracked which prompts and context structures led to the most successful, actionable results.
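To give a concrete sense of what "structuring the context" meant in practice, the sketch below shows one way a prototype could assemble user-supplied details into a single prompt. The field names and template are illustrative assumptions, not the production schema.

```ts
// Minimal sketch: assemble optional, user-supplied context fields into one prompt.
// Field names and the prompt template are illustrative assumptions.
interface PromptContext {
  clientIndustry?: string;
  engagementGoal?: string;
  priorNotes?: string;
}

function buildPrompt(task: string, context: PromptContext): string {
  // Keep only the context fields the participant actually filled in.
  const contextLines = Object.entries(context)
    .filter(([, value]) => typeof value === "string" && value.trim().length > 0)
    .map(([key, value]) => `- ${key}: ${value}`);

  return [
    "You are assisting a consultant. Use the context below when it is relevant.",
    contextLines.length > 0
      ? `Context:\n${contextLines.join("\n")}`
      : "No additional context was provided.",
    `Task: ${task}`,
  ].join("\n\n");
}
```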
01: Testing Technologies
To start, our goal was to evaluate whether LLMs could generate high-quality, relevant outputs and determine their potential for future refinement. The team initially envisioned a one-shot generation tool, with plans to evolve it into a more advanced system powered by Retrieval-Augmented Generation (RAG).
Prototype 1: Testing genAI capabilities
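As a rough illustration of the one-shot approach described above, the sketch below sends a single assembled prompt to an LLM and returns the first response. It assumes an OpenAI-style chat completions API; the actual provider, model, and prompt used in the prototype are not specified in this write-up.

```ts
// One-shot generation sketch, assuming the OpenAI Node SDK and an API key in
// OPENAI_API_KEY. The model choice is illustrative only.
import OpenAI from "openai";

const client = new OpenAI();

async function generateOnce(prompt: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  // Return the first completion's text, or an empty string if nothing came back.
  return response.choices[0]?.message?.content ?? "";
}
```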
02: Effective Context Detailing
As the project progressed, the team focused on identifying which types of context details would make generated results more tailored to individual users. To support this, I implemented input fields that allowed participants to specify contextual information during testing, enabling us to observe how different inputs influenced the quality of the output.
Prototype 2: Adding more potential context details
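To make the idea of configurable context details concrete, here is a small sketch of how preset options and participant-entered details could be merged before being passed to the prompt builder. The labels and values are hypothetical.

```ts
// Merge preset context details with participant-entered ones; an entry typed in
// by the participant overrides a preset with the same label. Labels are examples.
type ContextDetail = { label: string; value: string };

const presetDetails: ContextDetail[] = [
  { label: "Industry", value: "Healthcare" },
  { label: "Deliverable", value: "Project kickoff summary" },
];

function mergeContextDetails(
  presets: ContextDetail[],
  custom: ContextDetail[],
): ContextDetail[] {
  const byLabel = new Map<string, ContextDetail>();
  for (const detail of [...presets, ...custom]) {
    byLabel.set(detail.label, detail);
  }
  return [...byLabel.values()];
}
```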
03: Final Testing Prototype
I ultimately split the prototype into two key components: one section where participants could choose from preset context details or input their own, and another section that served as the main UI for the prompt box. This structure allowed us to isolate and evaluate how different context configurations impacted the effectiveness of the generated responses.
All user interactions and generated results were logged to Sentry, enabling the product team to later analyze which combinations of context and prompts led to more effective outputs, and why certain generations performed better than others.
Prototype 3: Separating context and prompt box UI for user testing
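On the logging side, below is a hedged sketch of how each interaction and its generated result could be captured in Sentry for later analysis, assuming the browser SDK. The event name, tags, and payload shape are assumptions, not the team's actual schema.

```ts
// Sketch: log a generation to Sentry with the prompt, the active context labels,
// and a preview of the output. The DSN is a placeholder.
import * as Sentry from "@sentry/browser";

Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

function logGeneration(prompt: string, contextLabels: string[], output: string): void {
  Sentry.captureMessage("llm_generation", {
    level: "info",
    tags: { contextCount: String(contextLabels.length) },
    extra: {
      prompt,
      contextLabels,
      outputPreview: output.slice(0, 500),
    },
  });
}
```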