1 of 36

Autoware Mini Lecture 7: Traffic light detection

Tambet Matiisen

26th March 2024

2 of 36

Architecture

[Architecture diagram, summarized:]

  • Localization: GNSS, Lidar, Camera + Map → current pose + speed
  • Object detection: Lidar, Radar, Camera → detected objects
  • Global planner: Map + Destination → global path
  • Traffic light detection: Map, API, Camera → stopline status
  • Local planner: current pose + speed, detected objects, global path, stopline status → local path
  • Controller: local path → Steering, Speed, Turn signals

3 of 36

Traffic light detection

Inputs:

  • map
  • API
  • camera

Outputs:

  • /detection/traffic_light_status (autoware_msgs/TrafficLightResultArray)

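A minimal sketch of watching this output from Python; the recognition_result field name matches the Plotjuggler exercise at the end of the lecture, but verify the exact message layout in autoware_msgs:

import rospy
from autoware_msgs.msg import TrafficLightResultArray

def callback(msg):
    # one result per stopline; field names to be verified against autoware_msgs
    for result in msg.results:
        rospy.loginfo('light %d: recognition_result=%d',
                      result.light_id, result.recognition_result)

rospy.init_node('tfl_status_listener')
rospy.Subscriber('/detection/traffic_light_status', TrafficLightResultArray, callback)
rospy.spin()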

4 of 36

Traffic light detection

  • Camera-based
  • API-based
  • Fusion

5 of 36

01 Camera-based

02 API-based

03 Fusion

6 of 36

Camera-based

01

7 of 36

  1. How many traffic lights are in this image?
  2. Which of them apply to us?
  3. How can the car figure this out?

8 of 36

Representing traffic lights on the map

9 of 36

Camera-based traffic light detection process

Project traffic light location to camera image → Crop out the traffic light → Classify the cropped image

Result: red / yellow / green / unknown
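The same pipeline in code form; a minimal sketch where project_to_pixel, crop and classify are illustrative placeholders, not the actual Autoware Mini functions:

# Minimal sketch of the per-frame camera detection loop.
def detect_traffic_lights(image, traffic_lights_on_path):
    results = {}
    for light in traffic_lights_on_path:
        u, v = project_to_pixel(light.map_position)   # 3D map point -> pixel
        roi = crop(image, u, v)                       # cut out the light
        results[light.id] = classify(roi)             # red/yellow/green/unknown
    return results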

10 of 36

Calculating the location of the traffic light w.r.t. the camera

[Figure: traffic light at position (x, y) and height z in the map frame; its location must be expressed in the camera_fl frame]

11 of 36

Question

source ~/autoware_mini_ws/devel/setup.bash
roslaunch autoware_mini start_bag.launch

rosrun rqt_tf_tree rqt_tf_tree

  1. What frames/transforms are involved in converting a point in the map frame into the camera_fl frame?
  2. Which transform is the least reliable?

12 of 36

Frames/transforms involved in figuring out traffic light location

The point travels through the chain map → base_link → lidar_center → camera_fl:

  • map → base_link: produced by the localizer. Reliability: low
  • base_link → lidar_center: measured manually. Reliability: high
  • lidar_center → camera_fl: calibrated, with manual fine-tuning. Reliability: medium

13 of 36

Looking up and applying the transform

from geometry_msgs.msg import Point, PointStamped
from tf2_geometry_msgs import do_transform_point

# Express the map-frame point in the camera frame at the image timestamp
transform = self.tf_buffer.lookup_transform('camera_fl', 'map', img_msg.header.stamp)
point_map = PointStamped(point=Point(float(x), float(y), float(z)))
point_camera = do_transform_point(point_map, transform).point
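Note that lookup_transform raises if the transform for the image timestamp has not arrived yet. Continuing the snippet above, a common pattern (a sketch, not necessarily what Autoware Mini does) is to pass a timeout and catch the tf2 exceptions:

import rospy
import tf2_ros

try:
    transform = self.tf_buffer.lookup_transform('camera_fl', 'map',
                                                img_msg.header.stamp,
                                                rospy.Duration(0.1))  # wait up to 100 ms
except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
        tf2_ros.ExtrapolationException) as e:
    rospy.logwarn_throttle(1.0, 'tf lookup failed: %s', e)
    return  # skip this frame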

14 of 36

Pinhole camera model

O is the focal point of the camera

X, Y, Z are the axes of the camera optical frame; the corresponding coordinates are (x, y, z)

U, V are the axes of the image plane; the corresponding coordinates are (u, v)

R is the image center, f is the focal length

Q is the projection of P onto the image plane

[Figure: pinhole camera geometry, with camera optical axes X, Y, Z, image plane axes U, V, and point P = (x, y, z) projected to Q = (u, v)]

15 of 36

Pinhole camera model

Due to similar triangles:

u / f = x / z, so u = f · x / z

Similarly:

v = f · y / z

f - focal length of the camera

[Figure: side view of the projection geometry, showing the similar triangles formed by (x, z) in the camera frame and (u, f) on the image plane]

16 of 36

Question

What are the pixel coordinates of a point (1, 2, 10) in the camera optical frame, given that

  • camera focal length is 8mm,
  • camera sensor is 2064 (H) × 1544 (V) pixels,
  • pixel size is 3.45 μm (H) × 3.45 μm (V),
  • origin of pixel coordinates is the top left corner?
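A worked sketch of the computation in plain Python (no ROS needed): convert the focal length into pixels using the pixel size, apply the pinhole formulas, then shift to the top-left origin (formalized on the next slide):

f_mm = 8.0                      # focal length
pixel_size_mm = 3.45e-3         # 3.45 um expressed in mm
fx = fy = f_mm / pixel_size_mm  # focal length in pixels, ~2318.8
cx, cy = 2064 / 2, 1544 / 2     # image center in pixels
x, y, z = 1.0, 2.0, 10.0
u = fx * x / z + cx             # ~1263.9
v = fy * y / z + cy             # ~1235.8
print(u, v)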

17 of 36

Camera intrinsic matrix

Compact way to represent the pinhole camera coordinate conversion

fx, fy - focal length in pixels (they can differ for a multitude of reasons)

cx, cy - center of the image in pixels

u·z, v·z - pixel coordinates scaled by depth; normalize by dividing by z
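Written out, this is the standard intrinsic matrix equation (in LaTeX notation):

\begin{pmatrix} u z \\ v z \\ z \end{pmatrix} =
\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}

so after dividing by z: u = f_x · x / z + c_x and v = f_y · y / z + c_y.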

18 of 36

Question

Open ~/autoware_mini_ws/src/vehicle_platform/config/calib/camera_fr.yaml

This is a config file for the same camera in previous question.

  1. Find the camera matrix
  2. Where do the fx, fy numbers come from?
  3. Why is the center of the image not exactly in the middle?

19 of 36

Camera distortions

20 of 36

Rectification

[Figure: unrectified vs. rectified image of the same scene]

21 of 36

Projection of 3D coordinates to 2D plane

from image_geometry import PinholeCameraModel

# Build the camera model from CameraInfo, undistort in place, then project
camera_model = PinholeCameraModel()
camera_model.fromCameraInfo(camera_info_msg)
camera_model.rectifyImage(image, image)
u, v = camera_model.project3dToPixel((point_camera.x, point_camera.y, point_camera.z))
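The projection only makes sense for points in front of the camera and inside the image. A minimal sanity-check sketch (the exact checks in Autoware Mini may differ):

if point_camera.z <= 0:
    return None                      # light is behind the camera
u, v = camera_model.project3dToPixel((point_camera.x, point_camera.y, point_camera.z))
if not (0 <= u < camera_model.width and 0 <= v < camera_model.height):
    return None                      # light projects outside the image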

22 of 36

Classification of the traffic light

  1. Open https://netron.app/
  2. Upload ~/autoware_mini_ws/src/autoware_mini/config/traffic_lights/tlr_model.onnx
  3. What is the size of the input image?
  4. How many convolutional layers are involved?
  5. What is the number of outputs?
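Once you know the input size, the model can be tried directly with onnxruntime. A minimal sketch; the 128×128 input size, NCHW layout, [0, 1] scaling and class order are assumptions to verify in Netron:

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('tlr_model.onnx')
input_name = session.get_inputs()[0].name

roi = cv2.imread('cropped_light.png')                  # BGR crop of one light
blob = cv2.resize(roi, (128, 128)).astype(np.float32) / 255.0
blob = blob.transpose(2, 0, 1)[np.newaxis]             # HWC -> NCHW, add batch
logits = session.run(None, {input_name: blob})[0]
print('predicted class index:', int(np.argmax(logits)))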

23 of 36

Camera-based traffic light detection process

Project traffic light location to camera image → Crop out the traffic light → Classify the cropped image

Result: red / yellow / green / unknown

24 of 36

Camera-based detection in Autoware Mini

source ~/autoware_mini_ws/devel/setup.bash
roslaunch autoware_mini start_bag.launch

  1. Enable Detection->Traffic lights->Left ROI image and Right ROI image
  2. Observe the precision of the bounding boxes
  3. Observe the accuracy of the predictions
  4. How much GPU memory is used by traffic light classification? Check with nvidia-smi

25 of 36

Problems with camera-based detection

26 of 36

API-based

02

27 of 36

API-based detection

MQTT broker:

  • Traffic lights publish their status over an authenticated TLS connection
  • Cars subscribe to the status of the traffic lights on their path over an anonymous TLS connection
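A minimal sketch of subscribing with paho-mqtt; the broker host and port come from the exercise below, while the topic name is a hypothetical placeholder (look up the real one via the api_id attribute in the map):

import ssl
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)           # anonymous, but encrypted
client.on_message = on_message
client.connect('mqtt.cloud.ut.ee', 8883)
client.subscribe('hypothetical/traffic_light/topic')  # placeholder topic
client.loop_forever()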

28 of 36

MQTT Explorer

sudo snap install mqtt-explorer
mqtt-explorer

  1. Setup:
    1. Host: mqtt.cloud.ut.ee
    2. Port: 8883
    3. Enable Validate certificate and Encryption (tls)
  2. What is the current state of the Town Hall Square traffic light?
    • Open ~/autoware_mini_ws/src/autoware_mini/data/maps/tartu_demo.osm in JOSM, click on a stopline element and look for the api_id attribute
    • Look up the corresponding topic in MQTT Explorer

29 of 36

Everything is a traffic light?

Pedestrian crossings, bus stops and roundabouts publish over the same MQTT broker whether it is safe to drive.

30 of 36

API-based detection in Autoware Mini

source ~/autoware_mini_ws/devel/setup.bash
roslaunch autoware_mini start_sim.launch tfl_detector=mqtt

  • The stopline status comes live from the traffic light controllers
  • Check whether the status of the very same stopline matches what you see in MQTT Explorer

31 of 36

Fusion

03

32 of 36

Traffic light detection

  • The left and right camera detections are combined by the majority merger
  • The majority merger output and the MQTT detection are combined by the priority merger
  • The final result is published as autoware_msgs/TrafficLightResultArray
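A minimal sketch of the majority-voting idea over per-detector results; names are illustrative, not the Autoware Mini implementation:

from collections import Counter

UNKNOWN = 'unknown'

def majority_merge(statuses):
    """Return the most common status; ties or no input give UNKNOWN."""
    if not statuses:
        return UNKNOWN
    ranked = Counter(statuses).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return UNKNOWN                    # no clear majority
    return ranked[0][0]

print(majority_merge(['red', 'red', 'unknown']))   # -> red
print(majority_merge(['red', 'green']))            # -> unknown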

33 of 36

Majority merger in Autoware Mini

source ~/autoware_mini_ws/devel/setup.bash
roslaunch autoware_mini start_bag.launch

  1. rostopic echo /detection/traffic_light_status
  2. rostopic echo /detection/camera_fl/traffic_light_status
  3. rostopic echo /detection/camera_fr/traffic_light_status

34 of 36

Plotjuggler

sudo apt install ros-noetic-plotjuggler
rosrun plotjuggler plotjuggler

  1. Run roslaunch autoware_mini start_bag.launch
  2. Click on Start under ROS Topic Subscriber
  3. Choose topics:
    1. /detection/traffic_light_status
    2. /detection/camera_fl/traffic_light_status
    3. /detection/camera_fr/traffic_light_status
  4. From each topic, drag all results[n]/recognition_result fields to the plot view

35 of 36

Key takeaways

  1. Detecting traffic lights is not enough; you also need to know whether they apply to you
  2. An HD map can be used to record which traffic light applies to which lane (stopline)
  3. Camera-based traffic light detection is finicky and unreliable
  4. Direct communication of traffic light status is much more reliable
  5. Majority voting is a common fusion scheme

36 of 36

Thank you!

Tambet Matiisen

UT Autonomous Driving Lab technology lead

tambet.matiisen@ut.ee

Autonomous Driving Lab, University of Tartu Institute of Computer Science

Narva mnt 18, 51009 Tartu

adl.cs.ut.ee