Web Development, Creative Coding, Interaction Design
React.js, P5.js, Google Teachable Machine, ML5.js
This project is a collection of interaction design experiments presented in the form of a game. Using Google Teachable Machine and the ML5.js library, I gained a deeper understanding of how to incorporate interactions into designs, and of whether there are better methods for designing interactions around the human information-processing model.
The purpose of this website is to educate designers on machine learning libraries and on methods for designing accessible interactions: a keyboard is often hard to use for people with disabilities that affect mobility, and hardware designed for accessibility is often expensive and uncomfortable. The website presents a variety of experimental interactions that users can activate with their hands, their faces, random objects around them, and sound. Users are encouraged to experiment with the interactions to:
1. Set a color to a module.
2. Move across the grid.
Shifting away from the keyboard and mouse, I wanted to use sounds and hand signals to interact with the computer; the main input channels were therefore the webcam and the microphone. These experiments look at how other modes of interaction can heighten, or complicate, a user's experience of navigating through an interface.
When the user moves to a block, they have the option of coloring it blue, purple, or pink using different hand gestures. I used the ML5.js handpose library to detect the number of fingers being held up and mapped each count to a different color.
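The gesture-to-color logic could be sketched as below. This is a minimal illustration, not the project's actual code: it assumes handpose-style keypoints as an array of 21 `{x, y}` objects, and uses a simple "tip above PIP joint" heuristic for a raised finger (which presumes an upright hand). The function names and color mapping are illustrative.

```javascript
// Indices of [tip, pip] joint pairs for the index, middle, ring, and
// pinky fingers in the 21-point hand model (thumb omitted for simplicity).
const FINGER_JOINTS = [[8, 6], [12, 10], [16, 14], [20, 18]];

// A finger counts as raised when its tip is above (smaller y than) its
// PIP joint — a rough heuristic that assumes an upright hand.
function countRaisedFingers(keypoints) {
  return FINGER_JOINTS.filter(([tip, pip]) =>
    keypoints[tip].y < keypoints[pip].y
  ).length;
}

// Map the finger count to the three module colors; any other count
// leaves the module unchanged.
function colorForFingers(count) {
  const colors = { 1: "blue", 2: "purple", 3: "pink" };
  return colors[count] || null;
}
```

In a real sketch, `countRaisedFingers` would run on each prediction from the handpose model inside the draw loop.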
Model in action
I used Google Teachable Machine to incorporate auditory interactions into the website, training a model to detect knocks, the crinkling of paper, and claps.
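Handling the classifier's output could look like the sketch below. It assumes results in the shape ml5's sound classifier produces (an array of `{label, confidence}` objects sorted by confidence); the label names, color mapping, and threshold are illustrative assumptions, not the project's actual values.

```javascript
// Illustrative mapping from trained sound labels to actions.
const SOUND_ACTIONS = {
  Knock: "blue",
  Clap: "purple",
  Crinkle: "pink",
};

// Only act on confident predictions, so background noise is ignored.
function actionForSound(results, threshold = 0.8) {
  if (!results.length) return null;
  const top = results[0]; // results assumed sorted by confidence
  if (top.confidence < threshold) return null;
  return SOUND_ACTIONS[top.label] || null;
}
```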
Training Google Teachable Machine to recognize sounds
Model in action
Thinking about other ways users could use the webcam to trigger interactions, I used the ML5.js objectDetection library to recognize items that users may have near them and assign them to colors for the modules on the grid.
Demonstration of how object detection interaction functions. First assign objects to color, then use objects to color modules.
Users show different items to the camera; the objectDetection library recognizes them and allocates each one to assigning a color or deleting a color.
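The two-phase flow described above (first assign objects to colors, then use objects to color modules) could be sketched as follows. It assumes detections in the shape ml5's object detector returns (`{label, confidence, ...}`); the function names and confidence threshold are illustrative.

```javascript
// Phase 1: remember which detected object maps to which color (or to
// the "delete" action), ignoring low-confidence detections.
function assignObject(assignments, detection, color, threshold = 0.6) {
  if (detection.confidence >= threshold) {
    assignments[detection.label] = color;
  }
  return assignments;
}

// Phase 2: look up the color assigned to the object currently shown.
function colorForObject(assignments, detection) {
  return assignments[detection.label] ?? null;
}
```

Keying assignments by label (rather than by bounding box) means any cup, say, triggers the "cup" color — a simplification that matches how label-based detectors like COCO-SSD generalize across instances.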
With the help of the ML5.js library, when the user says whether they want to go up, down, left, or right, the interface detects the sound and moves across the grid accordingly.
Audio interaction in action
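Once a direction word is recognized, moving across the grid is straightforward bookkeeping. The sketch below assumes a square grid and a classifier that outputs the strings "up", "down", "left", or "right"; the grid size is an illustrative placeholder.

```javascript
const GRID_SIZE = 5; // illustrative grid dimensions

// Each recognized word maps to an (dx, dy) step.
const MOVES = {
  up: [0, -1],
  down: [0, 1],
  left: [-1, 0],
  right: [1, 0],
};

// Apply a move, clamping so the cursor stays on the grid; unrecognized
// words leave the position unchanged.
function moveOnGrid(pos, direction) {
  const move = MOVES[direction];
  if (!move) return pos;
  return {
    x: Math.min(GRID_SIZE - 1, Math.max(0, pos.x + move[0])),
    y: Math.min(GRID_SIZE - 1, Math.max(0, pos.y + move[1])),
  };
}
```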
Thinking about other ways of controlling the directions, I came up with the idea of using my face as a controller: with the ML5.js faceMesh library, I was able to track when my face turned left, right, up, or down.
Face mesh for opening mouth, turning head left and right, and moving head up and down.
Demonstration of how a face may be used to control movement throughout the grid
To detect when a user turned their head right, left, or up, or opened their mouth to move down, I found the points in the faceMesh library that correspond to the left and right cheeks, the forehead, and the mouth, and measured the distance from each of those points to the nose; when a distance passes above or below a threshold value, the grid moves.
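The distance-threshold approach could be sketched as below. It assumes the relevant landmarks have already been pulled out of the face mesh into named `{x, y}` points; the threshold values (and the idea that a turned-toward cheek compresses toward the nose in the 2D projection) are illustrative, not the project's tuned numbers.

```javascript
// Euclidean distance between two 2D landmark points.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Infer a head direction by comparing landmark-to-nose distances against
// thresholds. points: { nose, leftCheek, rightCheek, forehead, upperLip,
// lowerLip }. Threshold values are illustrative.
function headDirection(points, t = { turn: 40, up: 50, mouth: 25 }) {
  if (dist(points.upperLip, points.lowerLip) > t.mouth) return "down"; // mouth open
  if (dist(points.forehead, points.nose) < t.up) return "up";          // head tilted up
  if (dist(points.leftCheek, points.nose) < t.turn) return "left";     // head turned
  if (dist(points.rightCheek, points.nose) < t.turn) return "right";
  return null; // facing forward: no movement
}
```

Checking the mouth first means an open mouth always wins, which avoids ambiguity when a user opens their mouth while mid-turn.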
Since many of the modes of interacting with the interface are experimental, a big part of designing the interface was aimed at educating users on the purpose of each interaction and at onboarding them.
With the addition of a home page, individual descriptions of each interaction and the libraries used, and pop-ups to onboard users, users can understand the purpose of each interaction and how it can be used to design for accessibility.
Original Figma mockups
These different experiments allowed me to understand how I could implement different interactions into an interface, and how they could make my interaction design both more intuitive and more usable. The onboarding process was especially hard to design, since many of these interactions are experimental and users are not expected to know how to interact with the interface on first use.
Moving forward, I would like to move past relying on the camera and microphone as the primary hardware detecting interactions, and explore how physical objects can be used in interaction design.