Jessica Cooper

University of St Andrews


Curious Bot is a prototype interactive autonomous robot that mimics lifelike behaviour by exploring its environment and responding to sound using machine learning. (It was initially conceived as a temporary stand-in for my distant cat, although it is far more obedient than she is.)

However, Curious Bot is much more than just a cat understudy - it is an inexpensive, open-source prototype tool for exploring how people relate to robots that simulate living creatures. This is a concept frequently seen in science fiction and popular culture, where robotic companions are portrayed and interacted with as if they were conscious, living entities with drives and personalities of their own. (Whether a machine could ever be capable of true consciousness is another argument, and the subject of long debate (Searle 1980).) Indeed, humans have a well-documented tendency to ascribe intent and personality to machines and to anthropomorphise objects (Sandoval, Mubin, and Obaid 2014), and these ideas are increasingly important as autonomous robots become more pervasive in society. Curious Bot is a rather weak simulation of life, but it acts as a proof of concept of a small, cheap robotic system that is capable of emulating a living creature to some extent.

Author Keywords

Arduino, Processing, Wekinator, Machine Learning, Autonomy, AI, Dynamic Time Warping, Speech Recognition, Social Robotics


The Curious Bot system consists of five parts. First, the hardware itself: sensors, motors and the Arduino Uno. Second, the Arduino software, which implements the basic behaviours. Third, the MFCC input program, which extracts features from an audio input (in this case, my laptop’s built-in microphone) and sends them to Wekinator. Fourth, Wekinator itself, a machine learning tool containing a trained model that maps audio inputs to different classes and outputs those as OSC data. Finally, Processing, which listens for that OSC data and forwards it as serial data over Bluetooth to the Arduino microcontroller, thereby allowing the robot to respond appropriately.


Arduino Uno; TBS2651 Chassis & Wheels; 2x DC Motor; 4AA Battery Box; 9V Battery with DC; SG90 Servo Motor; L298N Motor Controller; Breadboard; HC-SR04 Ultrasonic Ping Sensor; HC-06 Bluetooth Serial Transceiver; 3x SparkFun Sound Detector.


The Arduino software acts as Curious Bot’s brain, implementing default behaviours such as exploring, avoiding obstacles and reacting to sound in the environment. The obstacle-avoidance code is loosely based on an existing example (“Obstacle Avoiding Robot Using L298N”), but the rest is mine. The software also listens for serial input from Processing, so that it can respond to voice commands when given and perform actions based on the data received.

  1. MFCC

The Wekinator website provides an MFCC input program (“Example Code | Wekinator”) which takes audio input from a microphone (in this case, my laptop’s internal mic), extracts the MFCC features and sends them to Wekinator.


Processing is used as an intermediary: it listens for Wekinator’s OSC outputs and transmits them as serial data over Bluetooth back to the Arduino. The Processing code is adapted from the example given on the Wekinator site (“Example Code | Wekinator”).


Wekinator is first used to train a machine learning model that maps sound input to a number of states; in this case I have used dynamic time warping to identify voice commands. Once training is complete and the model reliably maps the sound input to distinct classes, it can be used to listen for voice commands and return the corresponding output, which is then sent back to the Arduino via Processing.


In its current incarnation Curious Bot is capable of the following:


A key aspect of exploring how humans relate to robots is how the robot in question looks - and so some kind of attractive chassis or case would be a natural step. I have modelled a very basic one (adapted from “Low Poly Cat - 3d Model”) which could be 3D printed and would simply slot on top of the existing chassis. A better one would have moving parts to accommodate the motors.

Additional sensors could be added to expand the range of inputs, such as touch, more sophisticated vision, or light. The machine learning model could then be extended and retrained to map these additional inputs to interesting behaviours, allowing Curious Bot to respond to people and the environment in a more lifelike way.

My initial intention was to have voice input received by the robot’s own sound sensors, so that you could speak to it directly. Unfortunately, the sound sensors I have are not sophisticated enough to provide input at a sufficient resolution to make this possible. Better microphones and quieter motors would solve this problem, which I think is important: speaking to the robot via a computer breaks the illusion of life. Nonetheless, people do seem to react to Curious Bot as if she were alive - I think that says more about our tendency towards anthropomorphism than about Curious Bot’s convincingness!

Another approach would be to have Curious Bot learn how to respond to stimuli. It could start with only random behaviours (plus perhaps a few basic responses, which could be seen as inbuilt instinct) and map inputs to new behaviours through reinforcement learning. For example, if a human calls the robot and it happens to come closer, this behaviour could be reinforced via a touch sensor mapped to a reward function (e.g. to simulate a reward pat on the head). Over time, Curious Bot could be trained to do anything that its combination of inputs and outputs could model. One would assume this would be a lengthy but rewarding task.

“Example Code | Wekinator.” Accessed December 22, 2017.

“Low Poly Cat - 3d Model.” Accessed December 22, 2017.

“Obstacle Avoiding Robot Using L298N.” Accessed December 22, 2017.

Sandoval, Eduardo Benitez, Omar Mubin, and Mohammad Obaid. 2014. “Human Robot Interaction and Fiction: A Contradiction.” In Social Robotics, 54–63. Springer, Cham.

Searle, John R. 1980. “Minds, Brains, and Programs.” The Behavioral and Brain Sciences 3 (3): 417.