University of Pennsylvania
Neil Shah and Aasif Versi
(Code is linked to Github)
People trying to lose weight often need to stick to strict diet regimens: they have to plan out both their meals and their timing. Unfortunately, busy lives at work and at home mean people cannot always maintain these plans. As a result, they tend to snack at different times throughout the day without recording those snacks as meals in their diet.
When people monitor their diets, they become more conscious of their food intake and are often able to cut unnecessary calories. Without recording what they eat, however, they cannot monitor that intake accurately. A system that keeps people up to date on their diet therefore lets them accurately judge how well they are sticking to their plan, helping them meet their weight goals. Ultimately, this should help people whose weight is a root cause of health ailments to manage their weight and become healthier in all aspects of their lives.
For our ESE 350 project, our main goal was to detect when a person was chewing and distinguish chewing from the other activities of daily life. Once chewing was detected, we wanted to give the user feedback so that they would know to record the meal and keep an accurate calorie count. To reach these goals we chose to use electromyography (EMG), placing electrodes on the side of the user's face to target the masseter muscle. After capturing these signals with a specialized circuit, we processed the data on our Mbed. Once we determined that the user was chewing, a vibration motor buzzed and did not stop until the user opened our Android app and logged the meal they were eating.
Muscle Sensor V3
We designed our EMG circuit around a few components. An instrumentation amplifier amplifies the difference between the two muscle electrode signals from the microvolt range to the millivolt range. Because the signal oscillates between positive and negative voltages, we rectify it, essentially taking its absolute value by blocking current flow during the negative half of the input. We then smooth the signal with a low-pass filter to discard the high-frequency oscillations inherent in the raw input. Finally, we amplify the smoothed signal from the millivolt range up to a volt-level range that our ADC can read.
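The stages of this analog chain can be sketched in software. The model below is purely illustrative: the gains, filter constant, and synthetic test tone are made-up values for demonstration, not our actual component values.

```python
import math

def emg_chain(samples, gain_inst=1000.0, gain_out=50.0, alpha=0.05):
    """Model the analog chain: instrumentation-amp gain, full-wave
    rectification, a single-pole RC low-pass (modeled as exponential
    smoothing), then final amplification into the ADC's range."""
    out = []
    smoothed = 0.0
    for v in samples:
        amplified = v * gain_inst                    # microvolts -> millivolts
        rectified = abs(amplified)                   # full-wave rectification
        smoothed += alpha * (rectified - smoothed)   # low-pass smoothing
        out.append(smoothed * gain_out)              # millivolts -> volts
    return out

# A 50 Hz test tone at 20 uV peak, sampled at 1 kHz, stands in for an
# EMG burst; the output is a non-negative, slowly varying envelope.
raw = [20e-6 * math.sin(2 * math.pi * 50 * n / 1000.0) for n in range(1000)]
envelope = emg_chain(raw)
```

The rectify-then-smooth pair is what turns a zero-mean oscillation into an envelope whose amplitude tracks muscle activity.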
Initially, we observed our EMG data using a BioPac MP35 to determine which electrode sites provided the best signal for classification. After choosing the site on the side of the jaw, we implemented the first iteration of our EMG sensor.
Figure 1: EMG Iteration 1
However, the circuit we built (figure 1) was incredibly noisy, and we were unable to produce any discernible output. We then transitioned to a PCB (figure 2) using higher-quality parts that were individually less noisy.
Figure 2: EMG Iteration 2 (PCB)
Figure 3: Noisy EMG on PCB
However, when we received the PCB, the circuit was just as noisy as our first implementation (figure 3). A final iteration (figure 4) eventually produced the correct EMG output (figure 5).
Figure 4: EMG Iteration 3
Figure 5: Working EMG Iteration 3
To begin classifying data, we first had to capture and visualize raw data using the Mbed and a laptop. We set the Mbed's ADC to sample at 1 kHz because the maximum frequency of a muscle signal is 500 Hz. A Python script on the laptop captured the data over serial and wrote it to a file, which we then visualized in MATLAB, as seen below. This was our first clear sign that the project we were taking on was possible.
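The capture script boiled down to reading one ADC value per serial line and writing the values to a file. A minimal sketch of the parsing step is below; the real script also opened a serial port (port name and serial library omitted here, since those details are setup-specific):

```python
def parse_capture(lines):
    """Convert raw serial lines (one ADC reading per line, normalized
    0.0-1.0 as the Mbed's AnalogIn reports) into floats, skipping any
    lines garbled in transit."""
    samples = []
    for line in lines:
        try:
            samples.append(float(line.strip()))
        except ValueError:
            continue  # dropped or corrupted serial line
    return samples
```

Skipping unparseable lines matters in practice: serial capture occasionally yields a truncated first or last line.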
Figure 6: Training Data
After capturing minutes' worth of raw data for both chewing and talking/rest, we had to classify it. We used a moving window to break the data into quarter-second chunks and ran a program that computed features on each chunk, such as entropy and FFT intensity.
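The windowing and feature step can be sketched as follows. This is a stdlib-only illustration: the histogram bin count, the naive DFT, and the exact feature definitions are stand-ins for our actual feature code.

```python
import math

FS = 1000          # 1 kHz sample rate
WIN = FS // 4      # quarter-second windows

def windows(samples, size=WIN):
    """Split the sample stream into non-overlapping quarter-second chunks."""
    for i in range(0, len(samples) - size + 1, size):
        yield samples[i:i + size]

def entropy(chunk, bins=16):
    """Shannon entropy (bits) of the amplitude histogram of one window."""
    lo, hi = min(chunk), max(chunk)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for v in chunk:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(chunk)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def fft_intensity(chunk, k):
    """Magnitude of the k-th DFT bin (a naive DFT keeps this stdlib-only)."""
    n = len(chunk)
    re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(chunk))
    im = -sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(chunk))
    return math.hypot(re, im)
```

Chewing produces rhythmic low-frequency bursts, so energy concentrated in low DFT bins, combined with the amplitude entropy of the window, is the kind of feature pair that separates it from talking.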
These raw feature calculations again suggested that we could distinguish chewing from talking, but of course we had to validate that intuition. To turn it into code, we used the machine learning tool Weka. We wrote a program to convert our feature data into ARFF files, which Weka takes as input to build a decision tree. This is the decision tree that we put onto our microcontroller (shown below).
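Generating the ARFF input amounted to emitting a header describing each feature, then one comma-separated row per window. A minimal sketch, with illustrative attribute names rather than our exact feature list:

```python
def write_arff(path, rows):
    """Write feature rows as a Weka ARFF file.
    Each row is (entropy, fft_intensity, label), label in {chew, other}."""
    with open(path, "w") as f:
        f.write("@RELATION chewing\n\n")
        f.write("@ATTRIBUTE entropy NUMERIC\n")
        f.write("@ATTRIBUTE fft_intensity NUMERIC\n")
        f.write("@ATTRIBUTE class {chew,other}\n\n")
        f.write("@DATA\n")
        for ent, fft, label in rows:
            f.write("%f,%f,%s\n" % (ent, fft, label))
```

Weka's J48 learner then reads a file like this directly and outputs the decision tree, which can be hand-translated into if/else statements for the microcontroller.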
Figure 7: Decision Tree
With the decision tree in hand, the next step was to implement it on the Mbed. We used a buffer to continuously collect data from the user; once enough data accumulated, we processed it and decided whether the user was talking or chewing. Once the chewing decision held for a long enough period, we had to make sure the user understood that they were eating and should log their meal. We did this by vibrating a motor and then waiting for the user to acknowledge and log the meal using MyFitnessPal.
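A sketch of this on-device logic, written in Python for readability (the real implementation ran in C++ on the Mbed, and the tree thresholds and streak length below are placeholders, not the values Weka actually learned for us):

```python
def classify_window(entropy, fft_intensity):
    """Hand-translated decision tree over one window's features.
    Thresholds here are illustrative placeholders."""
    if fft_intensity > 40.0:
        return "chew" if entropy > 2.5 else "other"
    return "other"

def detect_eating(window_labels, needed=8):
    """Declare eating (and fire the vibration motor) only after enough
    consecutive quarter-second windows classify as chewing, so a single
    misclassified window cannot trigger a false alarm."""
    streak = 0
    for label in window_labels:
        streak = streak + 1 if label == "chew" else 0
        if streak >= needed:
            return True
    return False
```

Requiring a sustained streak is what separates "the user is eating a meal" from an isolated chew-like window.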
We wrote an Android app whose main goal was to ensure that the user correctly logged the meal they were eating. The interface was straightforward: buttons the user could tap to choose which meal they were eating. On a button press, the app sent an HTTP request to ThingSpeak to record the time of the meal, and the user was then sent to MyFitnessPal to enter the meal they had just consumed.
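The request uses ThingSpeak's standard update endpoint. A sketch of how it is constructed (shown in Python rather than the app's Java; the API key is a placeholder, and the field number and meal-code mapping are our own illustrative convention):

```python
from urllib.parse import urlencode

# Illustrative mapping from meal button to a numeric channel value.
MEAL_CODES = {"breakfast": 1, "lunch": 2, "dinner": 3, "snack": 4}

def thingspeak_update_url(api_key, meal):
    """Build the ThingSpeak update request fired when a meal button is
    tapped. ThingSpeak timestamps the data point on its side, which is
    how the meal time gets recorded."""
    params = urlencode({"api_key": api_key, "field1": MEAL_CODES[meal]})
    return "https://api.thingspeak.com/update?" + params
```

Issuing a GET on this URL writes one point to the channel, and the annotated meal events fall out of the channel's own timeline.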
As we constructed the VibeWeight package, we tested our data and analysis by continually plotting the data in MATLAB and by using oscilloscopes to quickly determine whether the hardware was working. We gauged the accuracy of our machine learning algorithm by comparing the oscilloscope data with visual output from the Mbed on different events, such as when data was classified and when the system detected chewing.
To make sure the overall system worked for other people, we tested it with four additional users. In all cases we correctly determined when they were eating, presented them with annotated meal events on our ThingSpeak channel, and tracked their meals on MyFitnessPal.
This project taught us a lot about building robust machine learning systems that generalize across the population. Our goal was to create a product that would work for the general population with little to no required calibration. Developing the system required careful attention to timing in order to best allocate resources among classification, sensing, and communication.
We met all of our stated goals and some of our reach goals. We can classify chewing with over 90% accuracy, and therefore can correctly classify eating. We also created an Android application that provides annotated events to the user while integrating with other forms of diet tracking. As a result, VibeWeight has become a comprehensive diet detection and tracking system.
From here, our next goals are twofold. First, we want to develop a cleaner, more aesthetically pleasing package. Second, we want to develop ways to detect what people are eating, so we can provide feedback tailored to different types of food: for example, discouraging sugary and fatty foods while encouraging healthy foods from different groups throughout the day. We could then help people develop plans and goals for their daily diets, ultimately making VibeWeight an active source of advice rather than just a passive source of feedback.