Due: December 13, 2019
BeeBot, The Composting Robot
Members: Dylan Arceneaux, Sophia Batchelor, Stephanie Daffara, Mu-Ti Huang
Designing for Emerging Technologies, Fall 2019
BeeBot is a robot built to make composting easy. It is a robotic composter that arrives at a BeeBot stop or at your door to collect compost and protect the environment. BeeBot takes your compost to a secret area where it is used to nurture trees and other flowering plants. The secret area is unique to each BeeBot, and each BeeBot carries seeds and watering components to tend to its plants.
Identity and Design
It is no joke, nor an understatement, to say that humanity is slowly destroying nature. Nature has provided so much for us, from oxygen to land to beautiful food, sights, and living animals and plants, yet we have given very little in return. We have constructed and invented so much, but our inventions are still harmful to our planet. This is where our project steps in. BeeBot is a robotic composter, taking people’s leftover scraps and turning them into food for plants. It also keeps the location of its planting a secret, so that no human can find and harm the plants it tends.
BeeBot is envisioned to run as a fleet through cities. The robots will have scheduled stops, much like bus stops, where compost is dropped off and then taken to their secret gardens for growing plants. Our robot can carry and transport compost, move through rough terrain, cut up the compost, deposit it, plant seeds, and water its plants, all within its shell. It is a fully functional composter created by humans that saves nature from us.
Traditionally, robotic systems have been implemented to assist or replace humans in various tasks. They are used in assembly lines to improve manufacturing time and accuracy, in bomb disposal to avoid human injury, and to complete tasks that are difficult or dangerous for humans to perform. Thus, the role of robots in society has traditionally been to fulfil a service. This is reflected in popular futuristic sci-fi depictions of robotic systems “taking over” humanity, following traditional student-defeats-master, subject-uprising, and coup narratives. Our initial brainstorming sessions built on this idea: “what could a robot do for us?” Through experimentation we learnt that the Husqvarna robot could carry the weight of a person (image below).
Classmate with the robot.
To that effect, we started brainstorming different uses and ways that we could use the specific strengths of the Husqvarna.
List of our initial ideas on a whiteboard.
Notable Ideas from our brainstorming:
Scanning The Model
Image of 3D scanning the robot to use its model for fabricating a new shell.
To get a better grasp of the robot we were going to build from, we 3D scanned the lawn mower, with and without the shield.
Now that the bot was scanned, it was time to learn about the software stack that made the robot move. The Robot Operating System (ROS) images that had been flashed onto our SD cards were running conflicting versions (Kinetic on Jessie, Indigo on Buster), which largely limited the interactions that could be used with the bot. Our group went through a crash course in ROS and dove into development.
Upon learning more about the ROS setup we had been given to teleoperate the robot, we discovered that ROS could interface with the web, and was therefore capable of connecting with “Notion”, a 2019 brain-computer-interface device that also used WebSocket protocols.
Notion is able to detect various neural patterns formed by the synchronized firing of neurons. In practice, this means the system can learn the specific neural fingerprint of (almost) any person thinking of “up”, “left”, “right”, “down”, “triangle”, or “pinch”, among various other event-related actions. If Notion could be connected via WebSocket to the robot running ROS, an individual could think of a triangle and, with a paintbrush attached, the robot would move and paint a triangle. We pursued this idea for the first half of the project before pivoting, due to various constraints around the internet protocols allowed on school wifi, and the general lack of StackOverflow answers for “how to connect a brain to a robot”.
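The planned translation layer between Notion’s predicted mental commands and robot motion can be sketched as a simple lookup. The label names and the velocity values below are illustrative assumptions, not Notion’s actual API:

```python
def label_to_twist(label):
    """Translate a predicted mental-command label into (linear, angular)
    velocity components for a differential-drive robot.

    The labels and values here are hypothetical, for illustration only.
    """
    mapping = {
        "up":    (0.5, 0.0),    # drive forward
        "down":  (-0.5, 0.0),   # reverse
        "left":  (0.0, 1.0),    # rotate counter-clockwise
        "right": (0.0, -1.0),   # rotate clockwise
    }
    # Unrecognized labels stop the robot rather than guessing.
    return mapping.get(label, (0.0, 0.0))
```

Defaulting unknown labels to a stop keeps a misclassified thought from sending the robot anywhere unexpected.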
The conflicting versions of ROS on the supplied SD card meant that in order to interface with ROS beyond the supplied keystroke teleoperator, we would need to rebuild ROS. We took a new SD card and constructed a new image running Kinetic. It was a wonderful experience: learning about swap files to get around the 1 GB of RAM on the Raspberry Pi, our first time with a Linux operating system, and our first time playing around in Lubuntu. It was a long struggle to get the necessary files onto the Pi for it to run ROS successfully; in addition, a standard “git pull” would not correctly install the ROS system architecture. Once up and running, we pulled the Husqvarna research platform into the ROS environment and started to control the robot with the same keystroke teleoperation supplied in the original SD image.
The moment the bot successfully ran the teleop.py script with the new image
Unfortunately, the VNC viewer software that we had been using to remotely interface with the Pi would not run on the new system, so a 10-meter HDMI cable was used with a monitor, along with a peripheral keyboard and mouse.
The BCI-bot (brain-bot) proved to be immensely challenging, as it has not been done successfully by any known team to date. This is in large part because of the tech stack that had been accessible for brain-computer interfaces, which was why Notion was used. The idea of the BCI-bot was for someone to imagine the strokes of a paintbrush on a canvas and, without lifting a finger, have that art piece be painted. We wanted to critique the idea of robots as a service and create a collaboration between the artist (person) and the painter (robot). The performance of making art from a mental picture in your mind evoked the well-quoted exchange in I, Robot between Will Smith’s human character and the semi-conscious robot.
WS: “...can a robot turn a canvas into a beautiful masterpiece?”
Robot: “Can you?”
There were various learning curves for our group associated with working in a brand-new tech ecosystem (ROS, Node.js, rosbridge, HTML), and there were numerous battles between ourselves and the data debt we had accidentally created 24 hours earlier.
We successfully opened up a websocket and were able to control a simulation of the robot through the webpage.
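The messages sent over that WebSocket follow the rosbridge JSON protocol, which wraps a ROS message in a small envelope with an "op" field. A minimal sketch of building a velocity command for the turtlesim simulation (the `/turtle1/cmd_vel` topic is turtlesim’s default; the helper name is ours):

```python
import json

def make_cmd_vel_message(linear_x, angular_z, topic="/turtle1/cmd_vel"):
    """Build a rosbridge 'publish' envelope carrying a geometry_msgs/Twist.

    rosbridge accepts plain JSON over a WebSocket; the payload below
    mirrors the Twist message that turtlesim subscribes to.
    """
    return json.dumps({
        "op": "publish",
        "topic": topic,
        "msg": {
            "linear":  {"x": linear_x, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
        },
    })
```

Sending the resulting string through any WebSocket client connected to rosbridge (by default on port 9090) drives the simulated turtle, which is how a browser page, or Notion’s output, can steer a ROS robot without running ROS itself.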
Websocket open and receiving data.
Test webpage that connected to ROS
Robot simulation (turtlesim) that was being controlled by Notion.
After speaking to the instructors, we found out that a large body of robot-made art already exists. It was also difficult to convey the idea that one could control a painting robot with one’s mind by training a BCI system to recognize synchronization and desynchronization events in neural patterns. Thus, we decided to explore other options, and Art-bot was no more.
This time, we looked at the top problems in the world and brainstormed how a robot with these capabilities could contribute to solving them.
List of ideas from the second brainstorm session
Notable ideas from the second brainstorming session:
We returned to our initial discussions around what the role of a robot was and decided to play off the idea of robot uprisings destroying the planet.
An imagining of the world after robots take over, as depicted in Terminator.
Instead, we would use robots as a medium for protecting the earth from the humans.
In the world today, over 90 billion pounds of food ends up wasted, with 31% of the food supply ending up in landfills. Food that sits in landfills cannot break down properly, causing it to release methane, a potent greenhouse gas. Sadly, this contributes to 8% of global greenhouse gas emissions.
One way to manage this issue is composting. It provides wonderful benefits: compost is a superfood for plants, keeping them safe from pests and supercharged for growth. On top of that, composting is actually an easy and simple process. Compost material that people throw away properly goes through a composting process that chops and shreds the kitchen scraps, combines them with leaves and bark, and then adds water. This process produces compost over a period of about three months.
Still, people do not compost! This is mostly because compost is hard to maintain. The compost bin usually smells, and keeping it in a kitchen can create an unsafe cooking environment. Many buildings also lack access to a compost bin or truck.
To manage this issue we created BeeBot.
BeeBot allows you to compost daily without having to accumulate food scraps in your kitchen. It will pick up your compost and use it for planting!
Features include:
The interaction with BeeBot starts with the user walking up to a BeeBot stop or scheduling one to show up at their door for pickup. A companion BeeBot Map app shows users where the nearest stop is, along with the schedule for that stop.
The user then opens the hatch on BeeBot and places their compost waste in the internal bin. When BeeBot is done collecting for the day (or its bin is full), it rides off to its secret garden location. There, the compost is chopped by BeeBot’s motors. BeeBot then disperses a seed along with the compost onto the ground. Finally, water is poured onto the compost pile.
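The routine above is a fixed ordered sequence, which can be sketched as a small step table (the step names are ours, for illustration):

```python
# BeeBot's daily routine as an ordered sequence of steps.
PICKUP_SEQUENCE = [
    "open_hatch",       # user drops compost in
    "close_hatch",
    "drive_to_garden",  # travel to the secret location
    "chop_compost",
    "dispense_seed",
    "water",
]

def next_step(current):
    """Return the step that follows `current`, or None once the routine is done."""
    i = PICKUP_SEQUENCE.index(current)
    return PICKUP_SEQUENCE[i + 1] if i + 1 < len(PICKUP_SEQUENCE) else None
```

Keeping the sequence in one list makes the ordering explicit: chopping always precedes seeding, and watering always comes last.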
There were two ballast tanks, one on either side of BeeBot, to distribute the weight evenly.
Angles of the modular “Bee” system that clipped onto the “Bot”.
3D printed hinges that were held by magnets.
Undercarriage of the BeeBot
3D-printed stand for snapping onto the bot
The motor and the blade for the compost dispenser
Initial designs of the blade included thin blades that would spin through a full 360-degree rotation to chop the compost. However, this left no stopping mechanism between the compost collection bin and the funnel. We revisited the blade design and cut a large fan shape that could use a 180-degree motor and overlap with the struts in an “off” position to stop compost from falling through.
There were two parts to BeeBot’s code.
The Interaction Code
BeeBot planning code.
BeeBot had three motors and one additional control input: the microswitches attached to BeeBot’s doors.
A 180-degree servo motor was used to control the blade, which would “chop” the compost by rotating rapidly between 0 and 180 degrees. We decided on a single large fan blade that would largely cover the gaps which allowed compost to drop through. Future iterations would include a double-blade system, or a hatch-and-blade system with one blade dedicated to chopping compost and another larger piece functioning as a “stopper” to keep compost from immediately falling through BeeBot’s funnel.
An additional 180-degree servo motor was used to open and close the doors that released seeds into the compost. The water pump was a DC motor with an “on” (1) and “off” (0) function.
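Hobby servos like these are positioned by PWM pulse width rather than by angle directly, so the control code needs a small conversion. A minimal sketch, assuming a common 0.5–2.5 ms pulse range at 50 Hz (actual ranges vary by servo model):

```python
def angle_to_duty_cycle(angle, freq_hz=50.0, min_pulse_ms=0.5, max_pulse_ms=2.5):
    """Convert a servo angle (0-180 degrees) to a PWM duty-cycle percentage.

    Assumes a 0.5-2.5 ms pulse range at 50 Hz, a common but not
    universal convention; check your servo's datasheet.
    """
    if not 0 <= angle <= 180:
        raise ValueError("servo angle must be within 0-180 degrees")
    period_ms = 1000.0 / freq_hz  # 20 ms at 50 Hz
    # Linearly interpolate the pulse width across the angle range.
    pulse_ms = min_pulse_ms + (angle / 180.0) * (max_pulse_ms - min_pulse_ms)
    return pulse_ms / period_ms * 100.0  # duty cycle in percent
```

With these assumptions, 0 degrees maps to a 2.5% duty cycle and 180 degrees to 12.5%; the blade’s rapid chop is just alternating between those two setpoints.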
BeeBot also had audio and light control scripts. Both were used to inform users about the different states BeeBot could be in.
The sounds were chosen based on the interaction that they were involved with and what would best convey the message about BeeBot’s functionality during that interaction.
There were 5 light settings for BeeBot.
No lights for standard driving mode.
White lights whenever the hatch was open.
A temporary yellow confirmation light when the hatch was closed.
Green light for planting.
Blue light for when BeeBot was watering.
The light colours were chosen to be most intuitive for a bystander or user to understand.
Given the concept of the product, we decided to name the project BeeBot because bees are polylectic: they collect pollen from many kinds of flowers. The action of collecting compost from people and planting it around is similar to a bee’s relationship with flowers. In addition, the way the compost doors open on the top of the robot resembles the look of a bee.
Besides designing the bee for the logo, we also designed the wings as an infinity symbol, hinting at the sustainability aspect of the product.
In terms of color, we didn't want the addition of BeeBot to create visual disruption in nature. Therefore, we decided to color the product green.
BeeBot has a lot of potential to be put into the real world. But for it to be better at what it does, there would have to be planning and study behind where BeeBot stops should be placed and how often BeeBots should visit them. To support this project, we would also start a conversation with city planners and try to get subsidies from the cities where we wish to deploy the bot.
For the compost and storing of materials, it would be useful to add computer vision inside the bot to monitor what is being deposited, so it can reject items that are not compostable.
To improve the user experience, we’d like to implement automatic self-opening and closing doors so that the user does not have to physically touch the bot, providing a safer and cleaner interaction. To improve safety, we would also put the blade and hatch in two separate compartments. This would not only make the bot safer for people sticking their hands in (even though the blades stop once the hatch is open) but also give us more control over when to push compost out.
Another interesting addition would be a reward system that incentivizes users to compost more. Perhaps the user would get a score and a message that says, “Congratulations! Your compost has fed 3 new plants this week.” We also think that data such as how much people are composting, what they are composting, and which neighborhoods are falling behind could be valuable information for city administrations and general public knowledge.
Finally, the bot is already designed in an extremely modular way: you can simply snap the shell on and off. This way, shells could be exchanged depending on what the robot is used for.
 “Event related potentials”
 Cal Visitor’s permissions
 That was found out “the hard way”.
 It also helped to have the neuroscientist who built some of Notion’s stack on the team - although OpenBCI’s ganglion and gTec’s Unicorn were also considered and prototyped with.
 “Eve” was a previous name for BeeBot and was preserved in the codebase
 If a seed is planted in a forest and there’s no one to hear it...