To identify the color, shape, and size of the Lego bricks, we used binocular computer vision. One camera is positioned directly above the brick and the other directly to one side, so together they form an L around it. When the plate the brick sits on rotates to line up with the cameras, each one gets a square-on view of one face of the Lego.
Each camera rests in this 3D-printed camera mount.
Using OpenCV and shape recognition software, each camera draws a rectangle around the Lego, distinguishing the red, blue, or green brick from the white plate. The rectangle is only two dimensions of a three-dimensional object, but between the two views the computer can extrapolate a three-dimensional object. The top camera sees the Lego’s length and width, and the side camera sees the Lego’s length and height, so with both of them it is possible to figure out the size and shape of the brick. This is also why most animals have two eyes – it makes 3D sight much easier.
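As a rough sketch of how the two views combine: each camera contributes one 2D bounding rectangle, and the two rectangles share the length axis. The pixel values and the averaging of the shared axis below are our own illustration, not the exact code.

```python
# Sketch: merging the two cameras' 2D bounding boxes into one 3D size
# estimate. Both cameras see the brick's length, so that axis appears
# in both views and can be averaged.

def brick_dimensions(top_box, side_box):
    """top_box = (length, width) from the overhead camera,
    side_box = (length, height) from the side camera."""
    top_len, width = top_box
    side_len, height = side_box
    length = (top_len + side_len) / 2  # shared axis seen by both cameras
    return length, width, height

# Hypothetical pixel measurements from the two bounding rectangles:
print(brick_dimensions((40, 20), (42, 12)))  # → (41.0, 20, 12)
```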
To identify the brick’s color, the camera uses OpenCV to read the average color within the rectangle it has recognized. Averaging accounts for things like shadows that would change the color of part of the brick. Since all the bricks sit on a white background, the color stands out clearly and is much easier to read than the shape.
The software system for the LEGO sorter depends heavily on computer vision, specifically the OpenCV library, which we run on a Raspberry Pi. Our final setup consists of two cameras mounted 90 degrees apart so that they can capture the isolated brick on the plate from multiple angles and detect its characteristics. Color and size are the two distinguishing traits, both of which our sorter detects and differentiates by integrating information from both cameras to make sure all three dimensions are accurately accounted for.
OpenCV proved to be a challenge to work with, since it is very sensitive to lighting and positioning. To overcome these obstacles as best we could, we controlled the lighting, used secure mounts built by the mechanical team, and tuned our thresholds specifically to our environment. Although OpenCV is very particular, it offers many algorithms and classes for image processing, of which we took full advantage.
We started by detecting the four main LEGO colors: red, blue, green, and yellow. Using dynamic trackbars, we found the optimal thresholds for each color, specific to our environment. For each color, a mask is made: an image comprised of only black and white pixels. Pixels with color values between the denoted lower and upper bounds become white, and all of the remaining ones become black. We then cycle through each color filter and check whether an object has been detected. When an object is detected, in other words, when there are white pixels present in the mask, we know that the current LEGO is the color of whichever mask was applied at that moment.
Size detection proved to be considerably harder than color detection. To make sure all three dimensions of the isolated LEGO brick are accounted for, we use a two-camera setup: one camera looks straight down at the LEGO and the other views it from the side. Regardless of how the brick lands on the plate, the cameras can gather information about all three dimensions.
After a LEGO lands on the plate, the plate turns so that the brick is in line with the cameras. Then, using Otsu thresholding and Canny edge detection on each camera’s output, we place a bounding box around the detected shape. Since we are now working with rectangles, the process is more robust than trying to detect a 3D object with one camera.
Using serial communication, the Python script communicates with the Arduino once it has detected a LEGO. Depending on the characteristics detected, an integer is sent over the serial port, which tells the Arduino how to move so that the correct cup is under the ramp and the brick is sorted accordingly. To eliminate long, cumbersome if-else logic that checks every individual possibility, we implemented an additive system: each characteristic has a value, and the combination of a color and a size adds up to a unique integer that serves as the code passed to the Arduino.
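The additive scheme can be sketched as follows. The value tables, size names, and serial port path are hypothetical; the real mapping has to match the cup order on the Arduino side.

```python
# Hypothetical value tables: colors occupy 0-3, and size contributes
# a multiple of 4, so every (color, size) pair sums to a unique code.
COLOR_CODE = {"red": 0, "blue": 1, "green": 2, "yellow": 3}
SIZE_CODE = {"2x2": 0, "2x4": 4}

def sorting_code(color, size):
    """Return the unique integer 0-7 sent to the Arduino."""
    return COLOR_CODE[color] + SIZE_CODE[size]

print(sorting_code("green", "2x4"))  # → 6

# Sending it over the serial port would look roughly like this
# (pyserial, with an assumed port path):
# import serial
# with serial.Serial("/dev/ttyACM0", 9600) as port:
#     port.write(bytes([sorting_code(color, size)]))
```

Because the color and size ranges don't overlap, no two combinations collide, which is what replaces the nested if-else checks.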