
Ad Recommendation System Redesign

Role //

UX Researcher, Designer


Duration //

January—August 2023


Tools //

Adobe CC, Figma, UX Research Methods

With recent advancements in AI, machine-learning systems have found their way into ad recommendations on digital platforms. Working with a team, I used research and evaluation methods to develop a design solution that supports the process of reporting malicious, biased ads.


cover image

Problem Space

Social media platforms use AI algorithms to personalize the content shown to each user. Although tailored recommendations and advertisements can be helpful, they have also led to instances of algorithmic bias. Our project explores solutions that encourage users to take action against biased ads.

Background Research

The team started by Walking the Wall. After researching some of the current issues with AI on social media platforms, we determined that:

1. Some platforms make the process of reporting ads extremely difficult through the use of misleading UI.

2. Much of the bias on current platforms mirrors existing societal biases; in particular, AI algorithms tend to reproduce gender and socioeconomic biases.

Walking the Wall


Interviews and Surveys

After this research, I conducted contextual interviews and surveys. I asked college students how they approached biased ads on digital platforms and what they generally thought about ads. These conversations gave me insights into design decisions and user needs that could make the process of reporting ads more intuitive.

Contextual Interview


With the help of Affinity Diagramming, I drew more specific insights about users’ emotions, interpretations, actions, and thoughts on design decisions when it came to reporting ads.

Affinity Diagramming


One area I wanted to dig deeper into was users’ emotions during the reporting process, since it was one of the things the team learned the most about during our research.

By creating an Empathy Map and a Journey Map, I came to understand some of the frustrations users face when reporting ads:

1. They worry that their privacy will be invaded and their information used for malicious purposes.

2. Users are generally not motivated to report ads—most users believe that the process of reporting ads sidetracks them from their original goals on the platform.

Empathy Map


Journey Map


Ideation

With a better understanding of the pain points in the reporting process and research into other digital platforms, the team was ready to begin Storyboarding ideas using Crazy 8s. Each of us came up with eight ideas we could potentially pursue to address these problems.

Crazy 8s


Prototyping

Our final solution to the lack of motivation to report ads and to users’ privacy concerns was to incorporate ideas of collective action into our prototype.

I designed the Lo-Fi prototype and tested it with interviewees. The idea behind the prototype was that once a certain number of users report an ad, the ad is taken down. Users are thus encouraged to work with one another to “take down” biased ads. We believed this model would motivate users to take action against malicious ads more than the reporting models platforms currently implement.
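To make the collective-action mechanic concrete, here is a minimal sketch of the threshold-based takedown logic in TypeScript. The report threshold and data shapes are assumptions for illustration only; the actual prototype was a Lo-Fi design, not working code.

```typescript
// Illustrative sketch of the collective-action reporting model:
// once enough distinct users report an ad, it is taken down for everyone.

interface Ad {
  id: string;
  reporterIds: Set<string>; // users who have reported this ad so far
  takenDown: boolean;
}

// Assumed threshold for illustration; the prototype never fixed a number.
const REPORT_THRESHOLD = 50;

function reportAd(ad: Ad, userId: string): Ad {
  if (ad.takenDown) {
    return ad; // already removed, nothing more to do
  }
  ad.reporterIds.add(userId); // a Set ensures each user counts only once
  if (ad.reporterIds.size >= REPORT_THRESHOLD) {
    ad.takenDown = true; // collective action succeeds
  }
  return ad;
}
```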



Link to Prototype

LoFi Prototype


After some user testing, the team realized that interviewees were more motivated to report ads with the new model.

However, interviewees still found the reporting flow complicated; in particular, they felt that having to choose a reason why the ad was biased was extremely time-consuming.

Thus, for our final model, I helped redesign and improve the prototype so it could be navigated with as little reading and clicking as possible.



Link to Final Poster

Final Deliverable

Project Reflection

This project gave me a deep understanding of a variety of research and evaluation methods that can be used to tackle large-scale problems, such as the use of AI in ad recommendations.

As a team, we came to understand many of the design and ethical considerations involved in working with AI. We also learned how to design for scenarios where the user is angry and upset; after all, the last thing we would want is to anger the user even more.

Most importantly, I came to understand many of the daunting issues that come with the powerful yet mysterious advancements in AI.