
Overview
In today's fast-paced world, people suffer physical discomfort from long periods of sitting and limited exercise, leading to issues like back pain and fatigue. MoveEase offers a solution by providing personalised relaxation exercises right at your desk. Using PoseNet web technology, it guides users through stretches and gives real-time feedback based on camera detection.
Before diving into the concept and process, I'd like to walk you through the experience videos first for a better understanding of the interaction :)
This is the process of experiencing one of the yoga poses after she chose the practice level on her phone.
*Another level of poses
While she holds the correct body pose, the timer counts down.
My roles
Researcher, UI Designer (group)
Design tools
Figma
Duration
April 2023 - May 2023

Background research
According to the latest data from the Australian Bureau of Statistics (2022), 75% of Australians did not do any muscle-strengthening exercise in a week, which means they have not met the amount of physical activity the WHO recommends.
Prolonged sitting and the physical inactivity it brings ultimately raise the risk of bowel cancer, dementia, coronary heart disease and breast cancer. On average, 5.2% of all deaths (approximately 8,253 deaths) are attributed to physical inactivity. COVID-19 has added further strain to the problem of sitting disease: people are sitting even longer because of earlier work-from-home arrangements (Sharifi, 2022).
Problem statement
Sitting for long periods has detrimental effects on human health and results in a higher risk of many diseases (Australian Institute of Health and Welfare, 2023). As the World Health Organisation advises, adults aged 18-64 should do at least 75 minutes of vigorous aerobic physical activity throughout the week (Van Der Ploeg & Hillsdon, 2017).
Target users
People who sit for long stretches, such as office workers, freelancers, or students. This also includes people who may be sub-healthy due to lack of exercise and sedentary habits.
Our product is especially helpful for young people, who are increasingly experiencing these problems due to a sedentary lifestyle, making it relevant to a wide age range and diverse user backgrounds.
User scenario
Users can select personalised stretching exercises tailored to their preferences and physical condition, guided by real-time feedback from a camera monitoring their progress. With notifications for unlocked activities and reminders for regular cool-down practices, MoveEase promotes instant relaxation and encourages a healthier, more active lifestyle, despite the constraints of modern work or study environments.
User Flow

System Architecture

User Guide
1. Preparation and Set Up
2. The Input
3. The Output


Three levels
1. Beginner
2. Intermediate
3. Advanced

Process Documentation
2. The Input
Touch
MoveEase helps you meet life's health concerns
Ease of Interaction
MoveEase is easy to navigate: users select their preferred relaxation mode and perform the suggested stretches. The product's usability ensures people of all body conditions can relax comfortably.
Personalised experience
Users can choose different levels based on their preferences, and the body area map targets the key body parts, ensuring the experience meets each user's particular needs. Such personalisation also increases user engagement and loyalty.
Engagement & Motivation
When users complete an action, they receive encouraging text messages on their mobile, which helps keep them engaged and maintain healthier habits. A back button lets them choose other exercise actions, enriching their options.
Camera tracking
Identify and provide feedback using PoseNet
Real-time feedback
PoseNet's real-time pose estimation, paired with camera tracking, allows the system to monitor and analyze user movements during performance, providing immediate feedback to guide correct body pose adjustments.
User Engagement
The feature enhances MoveEase's interactivity, engaging users with real-time feedback and adaptive exercises. Its primary function is to offer a user-friendly platform for relaxation and stretching exercises.
Gamification model
It incorporates gamification to boost user engagement in physical activities and offers new exercises based on user progress, enhancing personalisation.
Privacy
PoseNet runs locally on the user's device, so there is no need to send video data to the server, ensuring user privacy.
Input process documentation (JavaScript)
1. JavaScript file & HTML tag


In our product, the sender runs on a mobile-phone webpage, and there are three different action choices, corresponding to three processes and multiple pages. When writing the code, we therefore load external JavaScript files by adding <script> tags in the HTML, so that the JavaScript in the corresponding file is loaded and executed. This approach gives us code modularity and lets us reuse code across multiple pages.
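As a minimal sketch of this setup (the file names and CDN version are illustrative, not the project's actual structure):

```html
<!-- Sender page: load p5.js plus shared and page-specific scripts as external files -->
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.6.0/p5.min.js"></script>
    <script src="sketch.js"></script>           <!-- shared assets and helpers -->
    <script src="pageLevelSelect.js"></script>  <!-- one of the action-specific pages -->
  </head>
  <body></body>
</html>
```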
2. Preload Assets

First, in the sketch.js file we preload the image assets that the interfaces defined in the other JS files will need.
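For instance, a sketch of what that preload step could look like (asset paths are assumptions):

```javascript
// sketch.js – preload() runs before setup(), so these images are ready
// for the interfaces drawn by the other JS files.
let imgHome, imgLevelSelect, imgConfirm;

function preload() {
  imgHome = loadImage('assets/home.png');              // hypothetical asset paths
  imgLevelSelect = loadImage('assets/level_select.png');
  imgConfirm = loadImage('assets/confirm.png');
}
```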
3. Button setting

To simplify the code and keep the interface clean, we create buttons by loading pictures. The size and position of each button are controlled with button.size() and button.position(). To stop the pictures from distorting noticeably on different phone screens, we size them with variables. In addition, button.mouseClicked() is used to jump to another page when the button is clicked.
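A minimal sketch of an image button sized with variables so it scales across phone screens (the asset path and proportions are assumptions):

```javascript
let confirmBtn;
let currentPage = 'levelSelect';

function setup() {
  createCanvas(windowWidth, windowHeight);
  confirmBtn = createImg('assets/confirm.png', 'confirm');    // button made from a picture
  confirmBtn.size(windowWidth * 0.4, windowWidth * 0.12);     // sized relative to screen width
  confirmBtn.position(windowWidth * 0.3, windowHeight * 0.75);
  confirmBtn.mouseClicked(() => { currentPage = 'now'; });    // jump to another page on tap
}
```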
4. Custom function

The functions here correspond to the button.mouseClicked() calls above: each one makes its button jump to the corresponding page when clicked. At the same time, the buttons from the previous page need to be removed via button.move(), otherwise they would still appear on the following pages.
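A sketch of that idea, continuing the button from the previous snippet; here p5's remove() stands in for the removal step (the write-up above refers to it as button.move()):

```javascript
// Assumes confirmBtn was created in setup() and wired up with
// confirmBtn.mouseClicked(goToNowPage), as in the sketch above.
function goToNowPage() {
  currentPage = 'now';   // draw() branches on this to render the Now page
  confirmBtn.remove();   // clear the button so it does not linger on the next page
}
```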
5. push(), pop() & imageMode()

On some interfaces we use imageMode(CENTER) to control the position of pictures. So that this mode does not affect other pictures, we wrap it in push() and pop().
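For example (assets illustrative):

```javascript
let bgImg, poseImg;

function preload() {
  bgImg = loadImage('assets/bg.png');      // hypothetical assets
  poseImg = loadImage('assets/pose.png');
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  image(bgImg, 0, 0, width, height);   // drawn with the default CORNER mode

  push();                              // isolate the style change
  imageMode(CENTER);
  image(poseImg, width / 2, height / 2);
  pop();                               // CORNER mode restored for later images
}
```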
6. sendMessage()

In some of the functions that handle page jumps, we added sendMessage(). At that moment an instruction is sent to the receiver, and the code on the receiver starts running. Depending on which Confirm button the user taps to enter a Now page, a different instruction is sent to the receiver.
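sendMessage() is a project-specific helper; the sketch below shows one way such a helper could publish an instruction over MQTT using the mqtt.js browser client (the broker URL, topic and payload format are assumptions, not the project's values):

```javascript
// Assumes the mqtt.js browser bundle is loaded via a <script> tag.
const client = mqtt.connect('wss://test.mosquitto.org:8081');   // public test broker

function sendMessage(instruction) {
  client.publish('moveease/pose', String(instruction));  // the receiver subscribes to this topic
}

// e.g. a Confirm button could send the numeric label of the chosen pose before showing the Now page:
// confirmBtn.mouseClicked(() => { sendMessage(3); currentPage = 'now'; });
```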
7. setTimeout()

On the Now page, the user only needs to complete the exercise on the receiver, so no interaction is required after entering this page. We use setTimeout() to jump to the next interface automatically after 28 seconds, which matches the time the user needs to complete the exercise on the receiver.
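A minimal sketch of that auto-advance (page names are assumptions):

```javascript
let currentPage = 'levelSelect';

function goToNowPage() {
  currentPage = 'now';
  // After 28 s – the time the user needs on the receiver – move on automatically.
  setTimeout(() => { currentPage = 'finish'; }, 28000);
}
```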
3. The Output
Let's get started with our body relaxing journey!
Before pose detection starts
Seamless transition: Users are smoothly directed from their smartphone to the web page, making the hand-off between devices feel effortless.
Clear instructions: Users are given clear instructions to navigate to the upcoming page and set up their laptop within an 8-second window. This helps them prepare for the detection process.
Engaging pose detection: Users engage in the pose detection process by assuming the correct pose as instructed. The activation of the timer in the upper right corner provides visual feedback, indicating that the process has started.
Pose duration and challenge: Users are required to maintain the correct pose for a challenging duration of 20 seconds. This provides a sense of achievement and encourages users to focus and improve their posture.
Completion feedback: Once the 20-second pose duration is completed, the user interface automatically directs users to the final page. This page confirms that they have successfully completed the pose and tells them which yoga pose they just performed.
Visual Graphics - the yoga poses
To enhance the experience, voiceover instructions are provided so users do not have to read the instructions on the screen.
We have implemented two voiceovers in total:
1. “Please ensure you stand on the standing line and place your laptop on the ground to adjust the angle to the right position.”
Once users arrive at the standby page, this first voiceover plays, giving them an instruction to follow.
2. “It’s now under pose detection.”
After the 8-second standby time has passed, users are informed that their pose is being detected and should hold the instructed pose for 20 seconds.
The sound - voiceover for giving instruction
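As a sketch of this voiceover timing with p5.sound (file names and the trigger hook are assumptions):

```javascript
let standbyVoice, detectVoice;

function preload() {
  standbyVoice = loadSound('assets/voiceover_standby.mp3');    // voiceover 1
  detectVoice = loadSound('assets/voiceover_detecting.mp3');   // voiceover 2
}

function enterStandbyPage() {                  // hypothetical hook for reaching the standby page
  standbyVoice.play();                         // "Please ensure you stand on the standing line…"
  setTimeout(() => detectVoice.play(), 8000);  // "It's now under pose detection." after the 8 s window
}
```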
Output process documentation (JavaScript)
1. Our Pre-trained Neural Network Model

We trained a model named modelc2 using the PoseNet model in ml5.js. In total we trained it on the five poses shown on the right:
The first two actions are included in the training to ensure the model does not mistake the user's initial state, or a scene without a person, for a target pose. We assigned numeric labels to the poses for easy data transmission through MQTT and convenient access to pose-related assets in later programming tasks.
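As a rough sketch of that training step, in the spirit of the ml5.js pose-classification approach the project cites (Shiffman, 2020); the option values are assumptions:

```javascript
// 17 PoseNet keypoints × (x, y) = 34 inputs; 5 output classes (the five poses above).
const options = {
  inputs: 34,
  outputs: 5,
  task: 'classification',
  debug: true
};
const brain = ml5.neuralNetwork(options);

// While recording, each frame's keypoints are added with the pose's numeric label:
// brain.addData(keypointArray, [poseLabel]);

function trainModel() {
  brain.normalizeData();
  brain.train({ epochs: 50 }, () => {
    brain.save('modelc2');   // writes the model files loaded later by the receiver
  });
}
```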
2. Pose Detection & Classification

We use the PoseNet model for real-time human pose estimation and a pre-trained neural network model for pose classification (Shiffman, 2020); a condensed sketch follows the list below.
- The PoseNet model is initialised via the poseNet object, and a callback function gotPoses is set to handle detected poses.
- A neural network model is created with ml5.neuralNetwork(), and our pre-trained model file 'modelc2' is loaded.
- In the classifyPose() function, the current frame's keypoint positions are used as inputs, and the neural network model classifies the pose. A callback function gotResult handles the classification result.
- In the gotResult() function, the pose label is determined based on the confidence of the classification result, and classifyPose() is called again to continue pose classification.
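A condensed sketch of that loop (file paths and the confidence threshold are assumptions):

```javascript
let video, poseNet, pose, brain;
let poseLabel = 'none';

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  poseNet = ml5.poseNet(video, modelLoaded);   // real-time pose estimation
  poseNet.on('pose', gotPoses);                // callback for detected poses

  brain = ml5.neuralNetwork({ inputs: 34, outputs: 5, task: 'classification' });
  brain.load({
    model: 'modelc2/model.json',               // pre-trained classifier files
    metadata: 'modelc2/model_meta.json',
    weights: 'modelc2/model.weights.bin'
  }, classifyPose);                            // start classifying once loaded
}

function modelLoaded() {
  console.log('PoseNet ready');
}

function gotPoses(poses) {
  if (poses.length > 0) pose = poses[0].pose;  // keep the most recent pose
}

function classifyPose() {
  if (pose) {
    const inputs = [];
    for (const kp of pose.keypoints) {
      inputs.push(kp.position.x, kp.position.y);  // current frame's keypoint positions
    }
    brain.classify(inputs, gotResult);
  } else {
    setTimeout(classifyPose, 100);             // wait until a pose is available
  }
}

function gotResult(error, results) {
  if (!error && results[0].confidence > 0.75) {
    poseLabel = results[0].label;              // numeric pose label (as a string)
  }
  classifyPose();                              // continue classifying
}
```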

3. Calculate scaleFactor after the camera data is loaded

If scaleFactor is calculated too early, the width and height of the video object cannot be fetched correctly, because the camera data may take some time to load. This can lead to an inaccurate scaleFactor, causing the camera view to be displayed in only one corner of the canvas. To fix this, we moved the scaleFactor calculation into the modelLoaded() function, which ensures the scale is calculated after the camera data has loaded.
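A minimal sketch of that fix:

```javascript
let video;
let scaleFactor = 1;

function setup() {
  createCanvas(windowWidth, windowHeight);
  video = createCapture(VIDEO);
  video.hide();
  ml5.poseNet(video, modelLoaded);
}

function modelLoaded() {
  // By now the camera data has loaded, so video.width/height hold real values.
  scaleFactor = min(width / video.width, height / video.height);
}
```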
4. Page Redirection & Playing a Sound Once the Page Changes

This code snippet focuses on the page navigation. It contains the draw() function, which is continuously called in p5.js, and determines which page of the application to display based on the currentPage variable.
standBySoundPlayed is a boolean variable that tracks whether the standby sound has been played. Initially set to false, it is used to control the playback of the sound. When certain conditions are met, such as being on the designated page and standBySoundPlayed being false, the standby sound is played and standBySoundPlayed is set to true to prevent repetition. This variable ensures the sound is played only once when required.
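A sketch of the routing and the play-once flag (page numbers and asset names are assumptions):

```javascript
let currentPage = 1;               // which page to draw
let standBySoundPlayed = false;    // ensures the standby sound plays only once
let standbySound;

function preload() {
  standbySound = loadSound('assets/voiceover_standby.mp3');   // hypothetical file
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(255);
  if (currentPage === 2) {              // standby page
    if (!standBySoundPlayed) {
      standbySound.play();
      standBySoundPlayed = true;        // prevents replaying on every frame
    }
    // ...draw the standby interface...
  } else if (currentPage === 3) {
    // ...draw the pose detection interface...
  } else if (currentPage === 4) {
    // ...draw the feedback interface...
  }
}
```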
6. Scaling and Flipping the Camera Image on the Canvas

By scaling the camera image with scaleFactor and using translate() for positioning and transformation, the code ensures the camera image is appropriately scaled, positioned, and flipped to fit the canvas and accurately represent the camera view.
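Roughly, the kind of transform involved:

```javascript
let video;
let scaleFactor = 1;   // computed once the camera data has loaded (see step 3)

function setup() {
  createCanvas(windowWidth, windowHeight);
  video = createCapture(VIDEO);
  video.hide();
}

function draw() {
  background(0);
  push();
  translate(width / 2, height / 2);   // move the origin to the canvas centre
  scale(-scaleFactor, scaleFactor);   // negative x scale mirrors (flips) the image
  imageMode(CENTER);
  image(video, 0, 0);                 // draw the camera frame centred and scaled
  pop();
}
```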
7. Initialise the receiver by introducing variables

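A sketch of the kind of global state the receiver initialises; the variable names follow those referenced in this write-up, while the initial values are assumptions:

```javascript
let video, poseNet, pose, brain;      // camera, PoseNet and classifier handles
let poseLabel = 'none';               // latest classified pose label
let scaleFactor = 1;                  // camera-to-canvas scaling factor
let currentPage = 1;                  // which receiver page draw() should show
let standBySoundPlayed = false;       // play the standby voiceover only once

let timer = 0;                        // millis() when the pose detection page was entered
let pagePoseDetectionTime = 20000;    // the 20 s the pose must be held (assumed to be in ms)
let poseType = ['1', '2', '3'];       // numeric labels of the target poses (illustrative)
let poseCheck = '1';                  // the pose the chosen exercise expects
let currentPose = 'none';             // most recently matched pose
let poseState = false;                // true while the user holds the expected pose
let poseTimer = 0;                    // millis() when the expected pose was first matched
let poseRemain = false;               // true once the pose has been held for at least 1 s
let poseRemainTimer = 0;              // millis() when the countdown bar started
```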
8. Pose Detection Timer & Countdown Bar
This code snippet shows how we manage the pose detection timer and display the countdown bar on the page. It tracks the time spent on the pose detection page and calculates the remaining time for maintaining a specific pose; a condensed sketch follows the breakdown below.


1. Timer and Page Transition:
- The t variable calculates the time elapsed since entering the pose detection page.
- If the elapsed time exceeds the predefined pagePoseDetectionTime, the current page is set to 4 (pageFeedback).
2. Pose Detection and State Management:
- The code iterates over the poseType array to check whether the detected poseLabel matches any of the predefined pose types. If a match is found, currentPose is updated accordingly.
- If the detected pose matches poseCheck and poseState is false, poseState is set to true and poseTimer is initialised.
- If the current pose does not match poseCheck, poseState is set to false.
3. Pose Remaining Time and Countdown Bar Display:
- If poseState is true, the code checks whether the time elapsed since entering the pose (millis() - poseTimer) is at least 1000 milliseconds and poseRemain is false. If so, poseRemain is set to true and poseRemainTimer is initialised.
- The t2 variable calculates the remaining time for maintaining the pose, i.e. the detection time left on the page when the hold began (timer + pagePoseDetectionTime - poseRemainTimer).
- The t3 variable represents the current time spent in the pose (millis() - poseRemainTimer).
- The remaining time is visualised as a rectangular progress bar, with the bar length (l) mapped from the current time (t3) over the total remaining time (t2).
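A condensed sketch of this logic, using the globals from the earlier receiver sketch; the exact sizes, colours and page constant are assumptions:

```javascript
function updatePoseDetection() {
  // 1. Timer and page transition
  const t = millis() - timer;                      // time since entering this page
  if (t > pagePoseDetectionTime) currentPage = 4;  // move on to pageFeedback

  // 2. Pose detection and state management
  for (const type of poseType) {
    if (poseLabel === type) currentPose = type;    // match against the predefined pose types
  }
  if (currentPose === poseCheck && !poseState) {
    poseState = true;                              // the expected pose has just been matched
    poseTimer = millis();
  } else if (currentPose !== poseCheck) {
    poseState = false;                             // pose lost – reset the state
  }

  // 3. Remaining time and countdown bar
  if (poseState) {
    if (millis() - poseTimer >= 1000 && !poseRemain) {
      poseRemain = true;                           // pose held for at least 1 s
      poseRemainTimer = millis();
    }
    if (poseRemain) {
      const t2 = timer + pagePoseDetectionTime - poseRemainTimer;  // time left to hold
      const t3 = millis() - poseRemainTimer;                       // time held so far
      const l = map(t3, 0, t2, 0, width * 0.8);                    // bar length
      fill(120, 200, 120);
      rect(width * 0.1, 40, l, 20);                                // countdown bar
    }
  }
}
```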