
Camera Calibration and Lidar Projection

In this tutorial, you will calibrate the onboard camera, undistort captured imagery, and overlay lidar points onto the rectified view. The calibration lets you fuse vision and range data for perception tasks like obstacle detection.

Background

Camera calibration has two parts: intrinsic calibration and extrinsic calibration. Intrinsic calibration estimates the camera matrix (focal lengths and optical center) together with the lens distortion coefficients; these parameters are what allow image distortion to be corrected. Extrinsic calibration identifies the relative poses of the sensors so their data can be fused accurately. In this tutorial, we will map 3D points from the Hokuyo lidar into the camera's image plane.
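
Concretely, fusing the two sensors boils down to one chain of transforms: an extrinsic rotation and translation moves each lidar point into the camera frame, and the intrinsic camera matrix projects it onto the image plane. A minimal numpy sketch of that chain (the numbers are illustrative, borrowed from the representative calibration in step 4 and the static transform in step 7):

    import numpy as np

    # Intrinsics: focal lengths and optical center. Distortion is ignored here
    # because we project onto the rectified image.
    K = np.array([[2618.25, 0.0, 961.09],
                  [0.0, 2614.68, 600.07],
                  [0.0, 0.0, 1.0]])

    # Extrinsics: rotation R and translation t taking lidar-frame points into the
    # camera frame. Identity R assumes, for this sketch only, that the lidar axes
    # already coincide with the camera optical axes (x right, y down, z forward).
    R = np.eye(3)
    t = np.array([0.10, 0.0, 0.15])

    # One lidar return roughly 3 m ahead of the camera
    p_lidar = np.array([0.2, 0.1, 3.0])

    p_cam = R @ p_lidar + t   # extrinsic step: lidar frame -> camera frame
    u, v, w = K @ p_cam       # intrinsic step: camera frame -> image plane
    print(u / w, v / w)       # pixel coordinates after perspective division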

Prerequisites

Learning Objectives

  • Process and combine data from the camera and laser scanner
  • Capture a calibration dataset
  • Compute intrinsic and distortion coefficients for the camera
  • Generate undistorted images using ROS 2 image pipelines
  • Transform lidar points into the camera optical frame using known extrinsics

Deliverables

  • Camera calibration files saved under ~/roboracer_ws/params/cameras/
  • A video of the live visualization of projected lidar points overlaid on rectified images

Tutorial Steps

1. Prepare the Calibration Board

  1. Download the 8×6 checkerboard with 25 mm squares: Checkerboard-A4-25mm-8x6.pdf. If you generate a custom target, ensure the dimensions match the calibration script configuration.
  2. Print at 100% scale and secure the sheet to a rigid, flat surface such as a book or clipboard so it stays perfectly planar during data collection.

2. Launch the Calibration Session

  1. Power on the car and confirm the camera is connected to the desired Orin camera port (cam0 or cam1).
  2. Start the publisher and calibration GUI using the tmuxinator profile:
    cd ~/roboracer_ws/tmux/camera_calibration/mono/
    SENSOR_ID=0 tmuxinator
    
    Use SENSOR_ID=0 for CAM0 or SENSOR_ID=1 for CAM1 to match the physical connector you are calibrating.
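
If the tmuxinator profile is unavailable, the calibration GUI can also be started by hand. The sketch below assumes the standard ROS 2 camera_calibration package and a raw image topic named /camera/image_raw; adjust the remapping to whatever topic your camera driver publishes:

    ros2 run camera_calibration cameracalibrator --size 8x6 --square 0.025 \
      --ros-args -r image:=/camera/image_raw

Here --size counts the interior corners of the target and --square is the edge length in meters, matching the 25 mm board from step 1.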

3. Collect Calibration Views

  1. Watch the calibration GUI and move the checkerboard across the field of view.
  2. Tilt the board, vary its distance, and reach all image corners. The progress bars above the Calibrate button will turn green once enough diverse samples are captured.
  3. Maintain even lighting and avoid motion blur; pause briefly at each pose so the detector locks onto the corners.

4. Compute and Persist Intrinsics

  1. Click Calibrate and wait several minutes for the solver to finish.
  2. When the results appear, copy the reported matrices directly into ~/roboracer_ws/params/cameras/0/calibration.txt (or .../1/ for CAM1). A representative output looks like:
    D = [0.06662380191585063, 0.178926075555916, 0.0065107662877980045, -0.0024677565939998894, 0.0]
    K = [2618.251997383138, 0.0, 961.088272919927, 0.0, 2614.6803126553446, 600.0655148190227, 0.0, 0.0, 1.0]
    R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
    P = [2663.482421875, 0.0, 957.9318937059143, 0.0, 0.0, 2658.449462890625, 603.11407205879, 0.0, 0.0, 0.0, 1.0, 0.0]
    
  3. Commit calibration files to version control so the intrinsics travel with the project.
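
For a quick offline check of the saved intrinsics, the matrices can be parsed straight out of calibration.txt and used with OpenCV. This is a minimal sketch assuming the file keeps the exact "D = [...]" / "K = [...]" layout shown above and that a raw test frame has been saved to disk (file names here are hypothetical):

    import ast
    import os

    import cv2
    import numpy as np

    def load_calibration(path):
        """Parse 'NAME = [...]' lines from calibration.txt into numpy arrays."""
        values = {}
        with open(path) as f:
            for line in f:
                name, sep, rest = line.partition("=")
                if sep:
                    values[name.strip()] = np.array(ast.literal_eval(rest.strip()))
        return values["K"].reshape(3, 3), values["D"]

    K, D = load_calibration(os.path.expanduser("~/roboracer_ws/params/cameras/0/calibration.txt"))
    frame = cv2.imread("test_frame.png")          # hypothetical saved raw frame
    rectified = cv2.undistort(frame, K, D)
    cv2.imwrite("test_frame_rect.png", rectified)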

5. Calibrate Additional Cameras

If the platform has a second camera, repeat steps 2–4 with SENSOR_ID=1 and save the output under ~/roboracer_ws/params/cameras/1/calibration.txt.

6. Publish Rectified Images

  1. Update the camera launch file to load the new calibration (a sketch of the referenced camera_info.yaml format follows this step):
    # fri_bringup/config/camera.yaml
    camera:
      ros__parameters:
        camera_info_url: package://fri_bringup/config/camera/camera_info.yaml
    
  2. Launch the rectification pipeline:
    ros2 launch image_proc image_proc.launch.py
    
  3. Confirm /camera/image_rect exists and looks undistorted in Foxglove.
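
The camera_info_url above should resolve to a file in the standard ROS camera_info YAML format. A sketch assembled from the representative step 4 output is shown below; the image dimensions and camera name are assumptions to be replaced with your sensor's actual values:

    image_width: 1920
    image_height: 1200
    camera_name: cam0
    camera_matrix:
      rows: 3
      cols: 3
      data: [2618.252, 0.0, 961.088, 0.0, 2614.680, 600.066, 0.0, 0.0, 1.0]
    distortion_model: plumb_bob
    distortion_coefficients:
      rows: 1
      cols: 5
      data: [0.066624, 0.178926, 0.006511, -0.002468, 0.0]
    rectification_matrix:
      rows: 3
      cols: 3
      data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
    projection_matrix:
      rows: 3
      cols: 4
      data: [2663.482, 0.0, 957.932, 0.0, 0.0, 2658.449, 603.114, 0.0, 0.0, 0.0, 1.0, 0.0]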

7. Determine Camera and Lidar Extrinsics

  1. Measure the static transform between the lidar and camera frames, or use an optimization tool like lidar_camera_calib.
  2. Publish the transform via a static broadcaster:
    ros2 run tf2_ros static_transform_publisher 0.10 0.0 0.15 0 0 0 camera_link laser
    
  3. Validate the TF using ros2 run tf2_tools view_frames to ensure the tree is connected.
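
Downstream code usually needs this transform as a 4x4 matrix. A sketch of reading it back from TF in Python, assuming the frame names used by the broadcaster above and that scipy is available:

    import numpy as np
    import rclpy
    from rclpy.node import Node
    from rclpy.time import Time
    from scipy.spatial.transform import Rotation
    from tf2_ros import Buffer, TransformListener

    rclpy.init()
    node = Node("extrinsics_reader")
    tf_buffer = Buffer()
    listener = TransformListener(tf_buffer, node)  # keeps the buffer filled

    # Spin briefly so the latched static transform arrives
    for _ in range(50):
        rclpy.spin_once(node, timeout_sec=0.1)
        if tf_buffer.can_transform("camera_link", "laser", Time()):
            break

    tf = tf_buffer.lookup_transform("camera_link", "laser", Time())
    q, tr = tf.transform.rotation, tf.transform.translation
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat([q.x, q.y, q.z, q.w]).as_matrix()
    T[:3, 3] = [tr.x, tr.y, tr.z]
    print(T)  # maps points expressed in the laser frame into camera_link

    node.destroy_node()
    rclpy.shutdown()

Note that camera_link is the camera body frame; an additional fixed rotation into the camera optical frame (z forward, x right, y down) is typically still needed before projecting with the intrinsics.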

8. Project Lidar into the Image

  1. Run the projection node:
    ros2 run fri_perception lidar_to_camera_projection --ros-args -p point_topic:=/scan -p image_topic:=/camera/image_rect
    
  2. Watch the /camera/overlay topic in Foxglove. Points should align with physical obstacles.
  3. Adjust color mapping and clipping distance in the node parameters for clarity.
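
For reference, the core of the projection can be sketched in a few lines of Python with OpenCV. This illustrates the math rather than the fri_perception node itself, and assumes the points have already been transformed into the camera optical frame (x right, y down, z forward):

    import cv2
    import numpy as np

    def overlay_points(image_rect, points_cam, K, max_range=10.0):
        """Draw camera-frame 3D points (Nx3, z forward) onto a rectified image."""
        points_cam = points_cam[points_cam[:, 2] > 0.0]  # keep points in front of the camera
        if len(points_cam) == 0:
            return image_rect
        # The image is already rectified, so pass zero distortion coefficients
        pixels, _ = cv2.projectPoints(points_cam.astype(np.float64),
                                      np.zeros(3), np.zeros(3), K, np.zeros(5))
        ranges = np.linalg.norm(points_cam, axis=1)
        h, w = image_rect.shape[:2]
        for (u, v), r in zip(pixels.reshape(-1, 2), ranges):
            if 0 <= u < w and 0 <= v < h:
                c = int(255 * min(r / max_range, 1.0))
                # Near points drawn red, far points blue (BGR)
                cv2.circle(image_rect, (int(u), int(v)), 3, (c, 0, 255 - c), -1)
        return image_rect

Because the overlay runs on /camera/image_rect, the distortion coefficients passed to projectPoints are zero; if you project onto the raw image instead, pass D from the calibration.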

9. Sanity Checks

  • Move a flat board through the scene and confirm projected points trace its edges.
  • Check latency; overlays should follow live motion without noticeable lag.
  • Log metrics (projection error, number of points) for future comparisons.
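
One way to make the last bullet concrete is a small helper that reports the fraction of forward-facing lidar points landing inside the image, a cheap proxy for projection health over time. This is a hypothetical illustration, not part of fri_perception:

    import cv2
    import numpy as np

    def in_image_fraction(points_cam, K, width, height):
        """Fraction of camera-frame points (Nx3, z forward) that project inside the image."""
        points_cam = points_cam[points_cam[:, 2] > 0.0]
        if len(points_cam) == 0:
            return 0.0
        pixels, _ = cv2.projectPoints(points_cam.astype(np.float64),
                                      np.zeros(3), np.zeros(3), K, np.zeros(5))
        u, v = pixels.reshape(-1, 2).T
        inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        return float(inside.mean())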

Wrap-Up

Store calibration artifacts (camera_info.yaml, extrinsic transform) in version control and note the environment conditions used. Recalibrate whenever optics or mounting change.