1 of 27

The STARR JetBot Project

Department of Engineering, Northern New Mexico College

Capstone I & II

Dr. Sadia Ahmed

December 4, 2020

Travis Moore, Joselinn Rascon,

and Dr. Steven Cox (Mentor)

2 of 27

Our Team

  • Joselinn Rascon is a senior pursuing a bachelor's degree in Information Engineering Technology at Northern New Mexico College (NNMC).
  • Travis Moore is a junior in the Information Engineering Technology program at NNMC.
  • Dr. Steven Cox is our team mentor and an assistant professor in Engineering at NNMC.

3 of 27

Our Mission

  • To assemble and program a SparkFun JetBot to interact with its environment by leveraging artificial intelligence (AI) and machine learning.
  • To collaborate with Students & Teachers Assisting Real Research (STARR), an international group of computer scientists, engineers, teachers, and students conducting cutting-edge research with NASA.
  • To develop practical knowledge of robotics, artificial intelligence, machine learning, and computer vision.

4 of 27

JetBot Hardware

  • Our JetBots were assembled from SparkFun AI JetBot Kits, Version 2.1: https://www.sparkfun.com/products/16417

  • NVIDIA Jetson Nano Developer Kit
  • 64GB MicroSD card
  • Leopard Imaging 136FOV wide angle camera & ribbon cable
  • WiFi Adapter
  • Qwiic Motor Driver
  • Micro OLED Display
  • Lithium-Ion Battery Pack

(SparkFun, 2020)

5 of 27

JetBot Hardware, part 2: NVIDIA Jetson Nano

  • The NVIDIA Jetson Nano
    • An AI development board featuring a Tegra System on a Chip (SoC)
      • 4-core ARM based CPU
      • 4 GB DDR4 RAM
      • 128-core GPU(!)
        • Sufficient for real-time image processing

(NVIDIA, 2020)

6 of 27

JetBot Software

  • The JetBot Operating System
    • Linux for Tegra (L4T)
      • 64-bit, custom distribution of the Linux Kernel
      • File system and GUI derived from Ubuntu

  • NVIDIA JetPack Software Development Kit (SDK)
    • CUDA-X accelerated libraries and APIs for Deep Learning and Computer Vision.

7 of 27

JetBot Software, part 2: Controlling the JetBot

  • Python code executed in Jupyter notebooks controls the JetBot's actions.

  • A Jupyter Lab server running on the JetBot allows remote access through a Web browser.
    • The Jupyter Notebooks, the JetBot file system, and the Linux terminal are all accessible.

8 of 27

Our JetBots

STARR Mission Patches

  • Jawz 
  • Shepherd

9 of 27

The STARR Organization

  • Students & Teachers Assisting Real Research (STARR) is an international collection of teams collaborating to build knowledge about robotics and apply it to our JetBots.
  • We attend Zoom meetings twice a month organized by Russ Fisher-Ives, the founder of RoboRAVE, and headed by George Gorospe, a NASA Ames Research Engineer.
  • Mr. Gorospe conducts mini-lectures on topics including neural networks, Linux administration, Python programming, and anything else he believes will be useful.

10 of 27

STARR, part 2

  • As part of our membership in STARR, we are assigned tasks, and assisting other teams is encouraged.
  • Each team has access to an account on ZOHO, a Customer Relationship Management (CRM) platform.
    • On ZOHO, we upload data our JetBots have collected, post questions, and help each other troubleshoot problems.

11 of 27

AI Training Cycle

  • Training the JetBot consists of 5 parts:

    • Step 1: Collect Data
    • Step 2: Label Data
    • Step 3: Train AI Model
    • Step 4: Define Behaviors
    • Step 5: Deploy and Test

(Gorospe, 2020)

12 of 27

AI Training Cycle, Steps 1 & 2: Data Collection

  • We collect our data by positioning the JetBot and taking pictures using the JetBot’s camera.
  • Our data is labelled by the name of the directory it’s stored in.
  • Buttons in the data-collection notebook are linked to Python code that captures an image from the camera and stores it in the appropriate directory.
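The collection step can be sketched in a few lines of Python (the function name save_snapshot is illustrative, and the dummy bytes stand in for a real camera frame):

```python
import os
import uuid

def save_snapshot(image_bytes, label, root="dataset"):
    # The directory name doubles as the training label.
    directory = os.path.join(root, label)
    os.makedirs(directory, exist_ok=True)
    # A random UUID keeps snapshot filenames unique.
    path = os.path.join(directory, f"{uuid.uuid4()}.jpg")
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path

# A button callback would pass the camera frame; dummy bytes here.
path = save_snapshot(b"\x00\x01\x02", "free")
```

On the JetBot, one button per class ("free", "blocked", "left", "right") is wired to this same pattern.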

13 of 27

AI Training Cycle, Step 3: Train AI

  • We are using the open-source machine learning framework PyTorch (https://pytorch.org/) to preprocess our image data.
  • We are using the AlexNet (https://www.mathworks.com/help/deeplearning/ref/alexnet.html) convolutional neural network (ConvNet) to train our AI model.
  • After training, the JetBot will recognize whether new images captured with its camera correspond to “free”, “blocked”, “left”, or “right”.

But how?

14 of 27

Convolutional Neural Networks, Part 1: Bitmap Images

(Sharma, 2019)

  • A greyscale bitmap image is a grid of pixels, each represented by a number from 0 (black) to 255 (white); values in between are shades of grey.
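For example, a tiny 3×3 greyscale "image" is just a grid of numbers:

```python
# 0 = black, 255 = white, anything in between is a shade of grey.
image = [
    [  0, 128, 255],
    [128, 255, 128],
    [255, 128,   0],
]
# The network never sees "pixels", only these numbers.
top_right = image[0][2]  # 255, a white pixel
```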

  • The Process: (Deshpande, 2016)

15 of 27

Convolutional Neural Networks, Part 2: Convolutions

  • A convolution extracts features from an image by applying filters to the image grid.

    • There are many variations of filters for edge detection, image sharpening, blurring, etc.

(Ujjwal, 2016-1)

Image Grid:          Filter:

1 1 1 0 0            1 0 1
0 1 1 1 0            0 1 0
0 0 1 1 1            1 0 1
0 0 1 1 0
0 1 1 0 0

Sliding the filter across the image grid produces the feature map.
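The convolution on this slide can be reproduced in a few lines of Python, using the 5×5 image grid and 3×3 filter shown:

```python
# The 5x5 image grid and 3x3 filter from the slide.
image = [
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
]
kernel = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 1],
]

def convolve(image, kernel):
    # Slide the filter over the image; each output value is the sum of
    # elementwise products over the window the filter currently covers.
    k = len(kernel)
    size = len(image) - k + 1
    return [
        [sum(image[i + a][j + b] * kernel[a][b]
             for a in range(k) for b in range(k))
         for j in range(size)]
        for i in range(size)
    ]

feature_map = convolve(image, kernel)
# feature_map == [[4, 3, 4], [2, 4, 3], [2, 3, 4]]
```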

16 of 27

Convolutional Neural Networks, Part 3: ReLU

  • The Rectified Linear Unit (ReLU) is a non-linear function that returns 0 if the input is negative, and otherwise returns the input unchanged.

  • ReLU is a simple function, yet it allows training models of much greater complexity than linear functions do, while accelerating training by producing true 0 values. Previously, sigmoid and tanh functions were used to achieve non-linearity, but they produced worse results (vanishing gradients) at a much greater computational cost (Brownlee, 2020).
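In code, ReLU is a one-liner:

```python
def relu(x):
    # Negative inputs become 0; everything else passes through unchanged.
    return max(0, x)

activations = [relu(x) for x in [-2, -0.5, 0, 1.5, 3]]
# activations == [0, 0, 0, 1.5, 3]
```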

17 of 27

Convolutional Neural Networks, Part 4: Nodes and Layers

  • A single neuron, or node, consists of inputs (Feature Maps), a bias (b), and an output (Y).
  • Function ƒ = ReLU.

  • A group of nodes makes up a layer. In a fully connected layer, each node is connected to all nodes of the preceding layer.

(Ujjwal, 2016-2)
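The forward pass of a single node can be written directly from this description (the inputs, weights, and bias below are made-up example values):

```python
def relu(x):
    return max(0.0, x)

def node(inputs, weights, bias):
    # Weighted sum of the inputs, plus the bias b, through f = ReLU.
    return relu(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Y = ReLU(0.5*0.8 + (-1.0)*0.2 + 2.0*0.1 + 0.1) = 0.5
y = node(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.2, 0.1], bias=0.1)
```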

18 of 27

Convolutional Neural Networks, Part 5: Subsampling

  • Subsampling, also known as pooling, reduces the size of the feature map while retaining the most important information, which makes the input to the next layer more manageable. Taking the max() over each window is an effective way to do this.

(Srihari, 2020-1)
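A sketch of 2×2 max pooling, which halves each dimension of a feature map (the values below are illustrative):

```python
def max_pool_2x2(grid):
    # Each non-overlapping 2x2 block is reduced to its maximum value.
    return [
        [max(grid[i][j], grid[i][j + 1],
             grid[i + 1][j], grid[i + 1][j + 1])
         for j in range(0, len(grid[0]), 2)]
        for i in range(0, len(grid), 2)
    ]

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 2],
    [2, 0, 1, 3],
]
pooled = max_pool_2x2(feature_map)  # [[4, 2], [2, 5]]
```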

19 of 27

Convolutional Neural Networks, Part 6: Back Propagation and Learning

  • Back propagation computes the gradient of the error between a known, desired outcome and the network's actual output with respect to each weight.
  • Learning is adjusting the weights in the direction those gradients indicate, so the network's output moves toward the desired outcome (Srihari, 2020-2).

(Ujjwal, 2016-2)
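The core idea can be shown with a one-weight model y = w·x and a squared error (the numbers are illustrative):

```python
# Model: y = w * x.  Error: E = (y - target)^2.
w, x, target = 0.5, 2.0, 3.0
learning_rate = 0.1

for _ in range(50):
    y = w * x
    grad = 2 * (y - target) * x   # dE/dw via the chain rule
    w -= learning_rate * grad     # step against the gradient

# w converges to 1.5, so the output w * x matches the target 3.0.
```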

20 of 27

Convolutional Neural Networks, Part 7: The Payoff

  • The ConvNet depicted below consists of many layers, with two rounds of convolution, ReLU, and subsampling. The final output is the image classification (a probability for each class).

(Ujjwal, 2016-1)

  • 3-D Representation of a Neural Network with many connected nodes at: https://www.cs.ryerson.ca/~aharley/vis/fc/

21 of 27

AI Training Cycle, Step 4: Define Behaviors

  • We define our JetBot’s behavior based on the classification probabilities it receives from the model we trained.
  • This image shows a frame where the probability that the JetBot is “free” is 99%.
  • This Python code defines the JetBot’s behavior. In this case, it will drive forward.
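In skeleton form (the probability values and action names below are illustrative stand-ins for the model's real output and the JetBot motor calls):

```python
labels = ["free", "blocked", "left", "right"]
probs = [0.99, 0.004, 0.003, 0.003]   # model output for one camera frame

# Map the most probable class to a motion command.
actions = {"free": "forward", "blocked": "stop",
           "left": "turn_left", "right": "turn_right"}
best_label = labels[probs.index(max(probs))]
command = actions[best_label]          # "forward" -> drive forward
```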

22 of 27

AI Training Cycle, Step 5: Deploy AI & Test

  • The final step in the cycle is to run the JetBot on our course.

  • Here is a link to a video of our JetBot in action:

23 of 27

What’s Next?

  • Train our JetBots to work and communicate together.
  • Keep training our JetBots so they can adapt to any environment or course.
  • Add new sensors to the JetBots for multi-modality.
  • Continue working on new tasks to get ready for the STARR Showdown (competition).
  • Joselinn will mentor Emily, a Dual Credit student from Espanola Valley High School, helping her learn Python, machine learning, and AI, and troubleshoot her JetBot.

24 of 27

Special Thanks To:

  • Paige Prescott of the Computer Science Alliance for sponsoring our team and purchasing our JetBots!
  • Dr. Steven Cox: It's an honor to work with you, sir. Thank you for your time, effort, and kindness.
  • Dr. Sadia Ahmed: For your support, and the opportunity to be part of this team.
  • Russ Fisher-Ives and George Gorospe for the STARR program.
  • Liliana Moore: For convincing us to build JetBots, for building the courses, and for always getting Shepherd unstuck.
  • Joyce Rascon: For building the mazes and being our photographer.
  • Jonathan Rascon: For letting us use your gaming monitor and carrying it around for us behind Jawz.

25 of 27

References
