Spring break - 4/5/18

The group worked on assembling the quadcopter so that we can begin testing it with OpenCV software. First, we 3D printed the frame in two different materials (ABS and PLA) to see which would be lighter. Here are pictures of both materials:

Then, we soldered the components together: the flight controller, the receiver, and the motors. The camera was then soldered onto the flight controller, and a LiPo battery connector was soldered on so that we could power the quadcopter. The camera is covered by a canopy made for a quadcopter called the Tiny Whoop.

Finally, everything was glued together, but the micro USB connector was left exposed so that we could tune the PIDs. The clamps seen in the picture press the flight controller into its socket while the glue sets. The four motors are also visible.

This is what the drone looks like after all the parts have been put together. The propellers on each diagonal spin in the same direction. The receiver and camera antennas are visible in this image, and the clear camera canopy is glued/taped onto the main 3D-printed frame.

This is the software we used for tuning. Here we specified the type of flight-controller board we are using and configured other settings, such as calibrating the accelerometer. This is also where we set up the PID feedback loop.
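To make the PID feedback loop concrete, here is a rough Python sketch of the idea behind it. This is not the flight controller's actual firmware; the gains and the loop timing below are made-up illustration values, not our tuned settings.

# Toy PID controller for one axis (roll, pitch, or yaw).
# kp, ki, kd are the gains that get adjusted during tuning.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured  # how far we are from the target rate
        self.integral += error * dt  # accumulated past error
        derivative = (error - self.prev_error) / dt  # how fast the error is changing
        self.prev_error = error
        # motor correction = weighted sum of present, past, and predicted error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# example: the roll axis should hold 0 deg/s but the gyro reads 5 deg/s
roll = PID(kp=0.8, ki=0.05, kd=0.02)  # illustration gains only
correction = roll.update(setpoint=0.0, measured=5.0, dt=0.001)

The flight controller runs an update like this on every axis, many hundreds of times per second, which is why good gain values make the difference between a twitchy drone and a smooth one.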

4/9/18 - Flying Day

This is our charging setup for the batteries. Each one takes about an hour to charge. Since we have 5 batteries and the flight time is at least 2 minutes, we will be able to fly the drone roughly every 10 to 12 minutes during the Makerfaire, unless we get more batteries.
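As a quick sanity check on that estimate: each battery needs about 60 minutes to recharge, so cycling through 5 batteries gives us one freshly charged pack every 60 ÷ 5 = 12 minutes, which is where the one-flight-every-10-to-12-minutes figure comes from.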

Here you can see our quadcopter flying in the Old Pit of our school. We practiced turning, avoiding obstacles, and landing to make sure we don't make any mistakes during the actual Makerfaire.

We collected some data on the quadcopter's battery life: it usually lasts almost 5 minutes, since we never fly it at maximum speed.

4/12/18 - Code Day

Here is the code we wrote today to perform pedestrian detection on a video. It works intermittently, but accuracy is poor overall because the video quality is bad and processing each frame takes a long time. To address this, we will soon try processing every 5th frame instead of every frame, and scaling the processed image down less (keeping more resolution improves detection, at the cost of speed).

#import libraries
from imutils.object_detection import non_max_suppression
import numpy as np
import imutils #need to install with pip
import cv2

cap = cv2.VideoCapture("video.mp4")
#cap = cv2.VideoCapture(0) in our case

# initialize the HOG person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

while cap.isOpened():
    ret, image = cap.read() #read one frame
    if not ret: #stop when the video runs out of frames
        break
    image = imutils.resize(image, width=min(400, image.shape[1]))
    orig = image.copy()

    # detect people in the frame
    (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
            padding=(8, 8), scale=1.05)

    # draw the original bounding boxes
    for (x, y, w, h) in rects:
        cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)

    # apply non-maxima suppression to the bounding boxes using a
    # fairly large overlap threshold to try to maintain overlapping
    # boxes that are still people
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)

    # draw the final bounding boxes
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)

    # show the output image
    cv2.imshow("Result", image)
    cv2.waitKey(50) #ms delay

cap.release()
cv2.destroyAllWindows()

Here is a video clip with pedestrians being detected by OpenCV.

4/14/18 - 4/26/18

The group worked on making a power distribution board for the drone. All of the components are mounted on an MDF board: a power supply, a 12 V regulator (for the video receiver), a 5 V regulator (for charging), switches to toggle the various components, and a small monitor for FPV flying. Everything is attached to the board with zip ties. To build it, the group used makerspace tools such as a drill and soldering irons, along with screws to mount the hardware.
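For reference, the power paths on the board look roughly like this (a sketch of our layout; which rail feeds the monitor is our best recollection rather than a measured fact):

power supply → switch → 12 V regulator → video receiver (and FPV monitor)
power supply → switch → 5 V regulator → battery charger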

4/21/18 - Testing Day

The quadcopter was flown outside to see how the battery would perform and how far its range extends. The battery lasts about 3 minutes (since we now flew it at higher speeds), and the range is more than the length of a soccer field. Flight suffers in windy conditions, so our quadcopter seems better suited to indoor missions.

4/27/18 - Testing the Board

For the first time, we attempted to turn on the power distribution board, and none of the components received power. This convinced us that we should have tested after each small step instead of assembling the whole thing and only then trying it out. We decided to meet after school to troubleshoot, using multimeters to test voltage and continuity (in case of an accidental short).

5/18/18 - Testing flight with Board

Today, we flew the quadcopter using the board that we designed (FPV mode). Below is an image of the operation.

5/21/18 - Broken Motor

After multiple flights, the shaft of one of our motors came out. The drone kept flipping in the air, so we had no choice but to replace the motor. We desoldered the old motor and soldered in a new one, and the drone flew again with no problems.

5/22/18 - Final Code

Here is the edited code, which skips frames to give the computer more time to process each image.

from imutils.object_detection import non_max_suppression
import numpy as np
import imutils
import cv2

screen_size = (1920, 1080)

cap = cv2.VideoCapture(0)

# initialize the HOG descriptor/person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = 0
cv2.namedWindow("Result", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("Result", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# loop over the camera frames
while cap.isOpened():
    ret, image = cap.read()
    if not ret: #stop if the camera feed drops
        break
    frame += 1
    if frame % 5 == 0: #only process every 5th frame
        frame = 0
        image = imutils.resize(image, width=min(300, image.shape[1]))
        orig = image.copy()

        # detect people in the frame
        (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
                padding=(8, 8), scale=1.05)

        # draw the original bounding boxes
        for (x, y, w, h) in rects:
            cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)

        # apply non-maxima suppression to the bounding boxes using a
        # fairly large overlap threshold to try to maintain overlapping
        # boxes that are still people
        rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
        pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)

        # draw the final bounding boxes
        for (xA, yA, xB, yB) in pick:
            cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)

        # show the output image, scaled up to fill the screen
        cv2.imshow("Result", cv2.resize(image, screen_size))

    key = cv2.waitKey(1) #ms delay
    if key == 27: #Esc quits
        break

cap.release()
cv2.destroyAllWindows()

5/24/18 - Reflection

Overall, our project worked well during the Makerfaire. The drone could be controlled without problems, either by looking at it directly or by using the FPV screen. We extended our original goal by constructing a power distribution board that distributes power to the FPV screen, battery charger, and other components. One limitation of the project was that the computer vision did not work as well as we had planned. We partially solved the problem by analyzing only every 5th frame, giving the computer more time to process each image. As a result, the computer vision worked well enough for our drone to identify people. As previously stated, the flight was smooth and actually better than expected, thanks to the software we used for PID calibration. This was a successful project, and we enjoyed presenting it at the Makerfaire.

Images from the pedestrian detection: