
Style Sprout

B5: Gabriella Pimenta-Fujikawa, Alanis Zhao, Riley Krzywda

18-500 Capstone Design, Fall 2024

Electrical and Computer Engineering Department

Carnegie Mellon University

System Architecture

Product Pitch

According to the National Institute of Standards and Technology (NIST), 85% of all textiles end up in landfills or incinerators. By helping users maximize their wardrobe usage, minimize unnecessary purchases, and reduce reliance on fast fashion, Style Sprout reduces this waste and promotes sustainability.

Our app enables users to scan clothing items for automated classification by type, usage, and color with over 60% accuracy. Style Sprout generates outfit suggestions tailored to user preferences, location, and wardrobe constraints, with 100% correctness for weather and usage requirements. Scanning takes an average of 2.03 seconds, while outfit generation averages just 0.904 seconds, ensuring speed and efficiency.

Style Sprout stands out with intuitive scanning and classification features, making wardrobe management accessible and sustainable.

Our project uses three ResNet50 classification models that predict clothing type, clothing color, and clothing usage from an input image. The models have been uploaded onto our Jetson Xavier NX for inference.

https://course.ece.cmu.edu/~ece500/projects/f24-teamb5/

System Description

Conclusions & Additional Information

This QR Code links to our website! Check it out for more information.

We are proud of our working implementation, but given more time we would improve our solution by raising the classification accuracy and improving the appearance and usability of our hardware setup. We learned lessons about the importance of data, keeping engineering ethics in mind, and the difficulties of hardware and software compatibility.

A possible extension of Style Sprout would be exposing user metrics such as style preferences and least favorite clothing items. Another helpful extension would be multi-user features, like allowing users to exchange their lesser-worn clothes.

Use-Case Requirement      | Target      | Actual
--------------------------|-------------|-------------------------------------------
Outfit Generation Time    | ≤ 2 seconds | ≤ 1 second
Scanning Time             | ≤ 3 seconds | ≤ 3 seconds
Classification Accuracies | ≥ 80%       | 61.25% (type), 60% (color), 78.75% (usage)

When the user presses the push button, an image is taken with the camera and fed through the models. The image and predicted labels are then communicated to our backend.
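The per-model label selection after inference can be sketched as a simple argmax over each head's output scores. The label lists and score values below are illustrative placeholders, not the project's actual classes:

```python
# Pick the highest-scoring class from each of the three model outputs.
# These label sets are illustrative stand-ins, not Style Sprout's real classes.
TYPE_LABELS = ["shirt", "pants", "dress", "jacket"]
COLOR_LABELS = ["red", "blue", "black", "white"]
USAGE_LABELS = ["casual", "formal", "sport"]

def predict_label(logits, labels):
    """Return the label whose score is largest (argmax)."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return labels[best]

def classify(type_logits, color_logits, usage_logits):
    """Combine the three model outputs into one label dictionary."""
    return {
        "type": predict_label(type_logits, TYPE_LABELS),
        "color": predict_label(color_logits, COLOR_LABELS),
        "usage": predict_label(usage_logits, USAGE_LABELS),
    }

labels = classify([0.1, 2.3, 0.4, 0.0], [1.9, 0.2, 0.1, 0.3], [0.5, 0.1, 3.0])
# labels == {"type": "pants", "color": "red", "usage": "sport"}
```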

The application’s backend is powered by FastAPI and handles database operations and outfit generation requests. Our frontend, built with Flutter, allows users to generate outfits and manage their closet by viewing all their pieces and changing item labels.
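A minimal sketch of the hard weather-and-usage filtering that outfit generation must satisfy; the item fields, the worn_threshold parameter, and the sample closet are assumptions for illustration, not the actual backend logic:

```python
def eligible(item, weather, usage, worn_threshold):
    """True if an item matches the requested weather and usage and
    has not yet hit the laundry (wear-count) threshold."""
    return (weather in item["weather"]
            and item["usage"] == usage
            and item["wear_count"] < worn_threshold)

def generate_outfit(closet, weather, usage, worn_threshold=3):
    """Pick the first eligible top and bottom; return None if no
    combination satisfies every constraint."""
    tops = [i for i in closet if i["type"] == "top"
            and eligible(i, weather, usage, worn_threshold)]
    bottoms = [i for i in closet if i["type"] == "bottom"
               and eligible(i, weather, usage, worn_threshold)]
    if tops and bottoms:
        return {"top": tops[0]["name"], "bottom": bottoms[0]["name"]}
    return None

closet = [
    {"name": "tee", "type": "top", "usage": "casual",
     "weather": {"warm"}, "wear_count": 1},
    {"name": "jeans", "type": "bottom", "usage": "casual",
     "weather": {"warm", "cold"}, "wear_count": 0},
]
generate_outfit(closet, "warm", "casual")
# → {"top": "tee", "bottom": "jeans"}
```

Because every suggestion is filtered before any preference ranking, weather and usage correctness holds by construction, which matches the 100% correctness figure in the pitch.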


Our backend is hosted on a local server and our database is hosted on AFS. We use an NVIDIA Jetson Xavier NX to accelerate edge-device inference on our 3 classification models. An Arducam B0522 camera and a push button are connected for taking photos.
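The button-triggered capture step could be sketched as a polling loop like the one below; the real system reads a GPIO pin on the Jetson, which is stubbed here with a plain callable so the sketch stays self-contained:

```python
import time

def wait_for_press(read_pin, poll_s=0.01, timeout_s=5.0):
    """Poll a button input until it reads pressed (True).
    read_pin is a callable standing in for the real GPIO read
    on the Jetson; returns False if the timeout elapses first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_pin():
            return True
        time.sleep(poll_s)
    return False
```

In the actual pipeline, a True return would trigger a camera capture followed by inference on the three models.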

Style Sprout stores all user data, aside from images, in a MySQL database. The database includes tables that store clothing details, preferred color combinations, disliked outfit combinations, and user-specific settings like location and laundry thresholds.

Figure 1: Full-System Block Diagram

Figure 2: Classification Model Diagram

Our image classification models use the ResNet50 architecture, which runs efficiently on edge devices. After inference, each image and its 3 predicted labels are communicated to our database and S3. We chose to use S3 to keep user data secure. We used Flutter to develop a mobile application for our users to interact with.
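The post-inference handoff (an S3 image key plus the three labels) might look like the following sketch; the field names and the key scheme are assumptions for illustration, not the project's actual API:

```python
import json
import uuid

def build_upload_payload(labels, user_id):
    """Package inference results for the backend. The image itself
    would be uploaded to S3 under the generated key; the key scheme
    (user_id/random.jpg) is illustrative, not the real one."""
    image_key = f"{user_id}/{uuid.uuid4().hex}.jpg"
    payload = {
        "user_id": user_id,
        "image_key": image_key,   # where the image lives in S3
        "type": labels["type"],
        "color": labels["color"],
        "usage": labels["usage"],
    }
    return json.dumps(payload)

doc = json.loads(build_upload_payload(
    {"type": "shirt", "color": "blue", "usage": "casual"}, "user-1"))
```

Keeping only a key in the database, with the image bytes in S3, is what allows the MySQL tables to hold everything except the images themselves.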

Figure 3: Hardware System

Figure 4: Database Structure

Figure 5: Frontend Interface

Figure 6-9: Effects of different image augmentations on classification accuracy

Figure 10: Comparison of time needed to populate closet with and without classification

Figure 11: Distribution of total time to generate an outfit

Figure 12: Results from user testing

Figure 13: Distribution of total time to take image, classify, and upload

User | Timing for Each Feature | Intuitiveness Rating | Functionality Rating
-----|-------------------------|----------------------|---------------------
1    | ≤ 10 seconds            | 10/10                | 5/10
2    | ≤ 10 seconds            | 9/10                 | 4/10
3    | ≤ 10 seconds            | 10/10                | 6/10

Figure 14: Target metrics for each use-case requirement and the actual tested value

System Evaluation