1 of 12

Intelligent Picking

Team Name: Excalibur

Institute Name: Indian Institute of Technology Bombay

2 of 12

Team members details:

Team Name: Excalibur

Institute Name: Indian Institute of Technology Bombay

Team Members:

  1. Tejal Ashwini Barnwal (Leader), Batch 2023, Chemical Engineering
  2. Dikshant, Batch 2022, Civil Engineering + Electrical Engineering (minor)
  3. Shreeya Shrikant Athaley, Batch 2023, Chemical Engineering
  4. Leena Chaudhari, Batch 2023, Mechanical Engineering
  5. Navjit Debnath, Batch 2023, Aerospace Engineering

3 of 12

Functionalities of the Robot:

  • Robot assembly: a master bot carrying a robotic arm for picking and stowing objects, accompanied by a slave bot for storing items when the distance between the pick-up and drop areas is variable.
  • What can the robot do?
    • Recognize any object and segregate it from a bunch of objects using CNNs
    • Plan the arm's motion to the grasping point using inverse kinematics
    • Adjust the grip aperture of the end effector according to the dimensions of the object
    • Place each object in the appropriate white grid of the drop area according to the requirements
    • Navigate the world easily by building a map on its own and planning the shortest path
    • Place objects one by one in the slave robot, avoiding drops while navigating
  • Can the robot do anything above and beyond the requirement?
    • Object detection and segregation
    • Addressing the problem of SKUs
  • Are there any out-of-the-box functionalities?
    • Look for a barcode, read it, and maintain a checklist of each item handled
    • Work for any distance between the pick-up and drop areas. In practice there is a large distance between the two; in the problem statement it is fixed at 20 cm, so our master bot alone can perform all the functionalities, but if the distance varies beyond the arm's reach, the slave bot comes into play and helps achieve the required task easily.
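The barcode checklist above is mostly bookkeeping; a minimal sketch in Python, assuming the barcodes have already been decoded to strings (decoding itself would use a library such as pyzbar, which is not shown here):

```python
from collections import Counter

class ItemChecklist:
    """Maintain a checklist of items handled, keyed by barcode.

    Assumes barcodes are already decoded to strings; the decoding
    step (camera + barcode library) is outside this sketch.
    """

    def __init__(self):
        self.handled = Counter()

    def record(self, barcode: str) -> None:
        """Log one handled item identified by its barcode."""
        self.handled[barcode] += 1

    def report(self) -> dict:
        """Return a {barcode: count} summary of everything handled."""
        return dict(self.handled)

# Example: three items pass the scanner, one barcode appears twice
checklist = ItemChecklist()
for code in ["8901234567890", "8900000000017", "8901234567890"]:
    checklist.record(code)
```

The report can be dumped at the end of a run to verify that every stock-keeping unit picked was also stowed.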

4 of 12

Robot Specifications:

Technical Specifications:

  1. Robot's computer on the MASTER bot: Linux-based embedded system (Raspberry Pi 3B / Jetson TX)
  2. Microcontrollers: Arduino Mega and NodeMCU for control and communication between the bots
  3. Laser distance scanner (RPLIDAR) for building the map using the SLAM technique
  4. Depth perception (Intel RealSense) for finding the exact distance between the object and the arm
  5. Camera module: RPi Camera / e-CAM130_CUTX1 - 13MP Jetson TX2 / SainSmart IMX219 AI Camera Module
  6. Inertial Measurement Units (MPU6050) (x3) for measuring the orientation of both bots and the arm
  7. Actuators:
     • DC motors (x8) with encoders on the MASTER and SLAVE bots, with L298N motor drivers
     • Stepper motors (x5) to achieve relative movements across the joints, with L298N motor drivers
     • DC servo motor (x1) to control the grip aperture
  8. Batteries: Lithium Polymer (the number can be decided according to the desired motor torque)
  9. Force-sensitive resistors (x2) for verifying that an object is stowed and how much force has been applied to it
  10. PC (OS: Linux) that is always connected to the embedded system

5 of 12

Physical Specifications:

  1. The robotic arm has 6 degrees of freedom.
  2. Material: structural steel (density = 7850 kg/m^3) for the arm and for the chassis of both the master and slave bots.
  3. Parallel gripper; maximum gripper range is 22.5 cm.
  4. Joints in the robotic arm: 5 (all revolute).
  5. Size specifications:
     • Chassis of master bot: max length = 0.75 m, width = 0.50 m
     • Chassis of slave bot: max length = 0.9 m, width = 0.8 m
     • Diameter of wheels: 4 inches
     • Maximum height the bot can reach: 1.5 m
     • Container of slave bot: length = 0.75 m, width = 0.75 m

Electrical connections:

Link to the PCB schematic and board files

6 of 12

PCB

7 of 12

Robot Visualization: 3D Diagram/Sketch

MASTER BOT

SLAVE BOT

8 of 12

Architecture

The robot assembly consists of a master bot and a slave bot (used when the distance between the pick-up and drop areas is large).

The master robot has a Single Board Computer (SBC) plus a microcontroller; the slave has a single microcontroller board.

The SBC communicates with the PC over the TCP/IP protocol. ROS would be installed on both the SBC and the PC, but all nodes would be configured to use the same master via ROS_MASTER_URI, which would point to the SBC.
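The shared-master setup amounts to a few environment variables on each machine; a sketch, where the hostname "sbc.local" and the IP addresses are placeholders for the actual network configuration:

```shell
# On the SBC (runs roscore, so ROS_MASTER_URI points at itself)
export ROS_MASTER_URI=http://sbc.local:11311
export ROS_IP=192.168.1.10

# On the PC: same master URI, but its own IP, so nodes started on
# the PC register with the master running on the SBC
export ROS_MASTER_URI=http://sbc.local:11311
export ROS_IP=192.168.1.20
```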

The execution of the robot is as follows-

  1. Grasping: Once the robot is placed at the edge of the pick area, images of the pick area are taken using the 2D camera. Image manipulation operations are done with OpenCV (Otsu binarization, Canny edge detection, dilation, image filling, and blob analysis), accessed through the CV_Bridge package. As a result, the objects are detected, a rectangular bounding box is formed around each detected object, and the image is cropped along the bounding box. The cropped images are sent as input to two CNN models:

a. a multi-class neural network to classify the images, and b. a grasp-detection network.
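One step of that pipeline, Otsu binarization, can be sketched without OpenCV to show what the operation computes (in practice `cv2.threshold` with the `THRESH_OTSU` flag does this job):

```python
import numpy as np

def otsu_threshold(gray):
    """Compute the Otsu binarization threshold for a uint8 grayscale image.

    Picks the threshold that maximizes the between-class variance
    of the background and foreground pixel populations.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_w = np.cumsum(hist)                      # pixel count at or below t
    cum_mean = np.cumsum(hist * np.arange(256))  # intensity mass at or below t
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]            # background weight
        w1 = total - w0              # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0                   # background mean
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1  # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels below the returned threshold are treated as background; the resulting binary image is what feeds the edge-detection, dilation, and blob-analysis stages.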

  • Arm movements: Once the grasping point is obtained, motion planning for the gripper to reach that position is performed by the MoveIt package of ROS. Feedback from the force sensors is used to control the grip force applied by a servo to pick the object. Once the object is picked, MoveIt is again used to place it in the basket of the slave robot. We will use inverse kinematics to find the required joint angles, and apply PD control based on task-space dynamics.
  • Localization and navigation (our extra point): Using the internal sensors (IMUs, motor encoders, and laser scans) for state estimation and update, SLAM is performed before the robot starts picking items. Once a map of the environment is ready, the robot navigates to the pick area (using the A* motion-planning algorithm).
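The PD law used for the arm joints can be sketched on a single joint. This is a deliberately simplified model under the same assumption noted in the Limitations section (centrifugal and Coriolis terms dropped, unit inertia); the gains and target are illustrative only:

```python
def pd_control(theta, theta_dot, theta_target, kp=25.0, kd=10.0):
    """PD control law: torque from position error minus velocity damping."""
    return kp * (theta_target - theta) - kd * theta_dot

def simulate_joint(theta_target, steps=2000, dt=0.01):
    """Integrate one joint with unit inertia under the PD law above.

    Simplified dynamics: centrifugal/Coriolis terms are neglected,
    so acceleration equals the commanded torque.
    """
    theta, theta_dot = 0.0, 0.0
    for _ in range(steps):
        tau = pd_control(theta, theta_dot, theta_target)
        theta_dot += tau * dt
        theta += theta_dot * dt
    return theta

# Drive the joint from 0 rad toward a 1.2 rad target
final_angle = simulate_joint(1.2)
```

With these gains the closed loop is critically damped, so the joint settles at the target without overshoot; on the real arm the torques would go to the stepper drivers instead of this toy integrator.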

After these operations, the master bot navigates to the drop area, followed by the slave bot. This communication is also achieved using SSH.

  • Stowing: As soon as the robot reaches the edge of the drop area, a 2D image is taken. Image manipulation is done to get the coordinates of each small square grid, and the robot then navigates to each point in turn, placing the objects one by one into the small square grids.
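The stowing targets reduce to one centre point per grid cell; a minimal sketch, assuming the drop area has already been localized as an axis-aligned rectangle and the grid dimensions are known (in the real pipeline the corner and size come from the image-manipulation step, not hard-coded values):

```python
def grid_cell_centers(x0, y0, width, height, rows, cols):
    """Return the (x, y) centre of every cell of a rows x cols grid
    laid over an axis-aligned drop area whose corner is (x0, y0).
    """
    cell_w = width / cols
    cell_h = height / rows
    centers = []
    for r in range(rows):
        for c in range(cols):
            cx = x0 + (c + 0.5) * cell_w   # horizontal cell midpoint
            cy = y0 + (r + 0.5) * cell_h   # vertical cell midpoint
            centers.append((cx, cy))
    return centers

# A 2 x 2 grid over a 1.0 m x 1.0 m drop area at the origin
targets = grid_cell_centers(0.0, 0.0, 1.0, 1.0, rows=2, cols=2)
```

Each returned centre becomes one placement target for the arm, consumed in order as the grids fill up.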

9 of 12

Robot/Solution Limitations

  • What can the robot not do?

Given below are some of the limitations of our bot:

  1. Dataset: We will require a large dataset for training our CNN models, and collecting it will take a fairly long time.
  2. Training time: Computation time for training our models will also be large at the start, but once training is done there are no further concerns.
  3. Precision: There will be slight deviation in the joint controls. The centrifugal and Coriolis terms in the equations of motion are difficult to calculate by hand, so we treat them as negligible; this causes a slight deviation between the desired grasp point and the point actually reached. We will apply PD control to compensate for this defect.

  • Are there any limitations compared to the requirements?

There are no such limitations compared to the requirements.

10 of 12

Brief on Programming Module

Programming languages used:

Python, C++

Software modules used:

Arduino IDE, OpenCV, PyTorch (used with ROS via libtorch), ROS framework, Gazebo, RViz

Packages to be used in ROS:

MoveIt!, CV_Bridge, Raspberry Pi camera library, librealsense, realsense camera

Packages to be built in ROS from scratch:

Master_bringup, Master_slam, master_navigate, master_vision, master_3dperception, master_gripforce, slave_bringup, slave_navigate

CNN models to be built:

1. Multi-class image classification
2. Grasp detection

References: Find them in this link.

11 of 12

Execution Plan

  1. We have made the CAD model of our bot, though CAD design will be an iterative process, i.e. the current design of the bot is subject to a few changes to cater even more efficiently to the needs of the pick mechanism. Structural simulations will be performed in Ansys for stress analysis of the bot. We can strengthen the chassis (if needed) and replace the solid structural components of the arm with parallel panels joined by trusses.
  2. The next step will be converting this model into URDF format so that we can simulate the manipulator and bot in Gazebo using the MoveIt package and cross-check our inverse-kinematics and trajectory-mapping calculations.
  3. Then, according to the weights of the objects to be picked up, we will calculate the required motor torques for the chassis and the power needed to drive the bot in the variable-distance case.
  4. After validation in simulation using ROS and Gazebo, we will start building our bot and arm (procuring the materials needed and building the prototype). We will incorporate the required sensors and camera module into the arm, placed so that our task is accomplished.
  5. Meanwhile, we have written the code for the CNNs and will start training our models for object detection and grasp detection.
  6. Once our models are trained, we will test them on various objects and validate the results in simulation and then on the physical robot.
  7. Finally, we will have a last check: the arm will capture an image of the drop area and detect its grids. The final position of the arm will then be the centre of each grid, which will change as the operation proceeds.
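The URDF conversion in the plan above boils down to describing each link and joint in XML; a minimal sketch of one revolute joint of the arm, where the link names, dimensions, and limits are placeholders rather than values from our CAD model:

```xml
<?xml version="1.0"?>
<!-- Sketch only: names, masses, and limits below are placeholders -->
<robot name="master_arm">
  <link name="base_link">
    <inertial>
      <mass value="2.0"/>
      <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"/>
    </inertial>
  </link>
  <link name="upper_arm"/>
  <joint name="shoulder_joint" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <origin xyz="0 0 0.1" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
```

Gazebo and MoveIt both consume this same description, which is what lets the inverse-kinematics calculations be cross-checked in simulation before the physical build.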

Incorporating the variable-distance case (distance here refers to the distance between the pick-up and drop areas):

  • We have a LIDAR attached to our bot, which will perform SLAM and build a 2D map of the environment. We will then convert this map into a grid, which will be used to find the shortest path between the initial and goal locations using the A* search algorithm.
  • The arm will then put the identical objects one by one into the slave bot, and the slave bot will follow the master bot using ROS packages. Once they reach the drop area, the arm will pick these objects from the slave bot and drop them.
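The grid-based planning step can be sketched directly: A* over an occupancy grid derived from the SLAM map, with Manhattan distance as the heuristic (the map, start, and goal below are illustrative):

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns the path as a list of (row, col) cells from start to goal,
    or None if the goal is unreachable. Uses 4-connected moves and the
    Manhattan distance heuristic.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = [cell]
            while cell in came_from:     # walk parents back to the start
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue                     # stale heap entry, skip it
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A small map with a wall; plan from the top-left to the bottom-right
world = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
path = a_star(world, (0, 0), (2, 2))
```

On the robot, each cell of `world` would come from thresholding the SLAM occupancy map, and the returned cell sequence would be handed to the navigation stack as waypoints.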

12 of 12