Train and Deploy an MLP Steering Policy

Overview

Use the dataset collected from manual laps to train a small multilayer perceptron (MLP) that predicts steering angles from camera images. Deploy the trained network on the car and evaluate closed-loop behavior.

Prerequisites

  • Completed Data Collection for Imitation Learning
  • Python training environment with PyTorch ≥ 2.1 (uv pip install torch torchvision)
  • A recorded bag from the manual laps, to be converted into image/label pairs
  • ROS 2 package fri_ml_control for deployment

Learning Objectives

  • Convert rosbag data into a supervised learning dataset
  • Implement and train a simple MLP for steering regression
  • Export the model for real-time inference on an embedded GPU
  • Deploy the trained policy on the car and validate autonomous driving on a controlled track with safety monitoring

1. Prepare the Dataset

  1. Extract images and steering commands:
    ros2 run fri_data_tools bag_to_csv --bag 2025-10-22-track-a --topic /drive > drive.csv
    ros2 run fri_data_tools bag_to_images --bag 2025-10-22-track-a --topic /camera/image_rect --out data/images
    
  2. Align image and command timestamps via interpolation in a Python notebook (a nearest-neighbor sketch follows this list).
  3. Convert steering angles to radians and scale them to [-1, 1].
  4. Split the dataset: 80% train, 10% validation, 10% test.
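
A minimal alignment-and-split sketch, assuming drive.csv has stamp and steering_angle columns, image filenames encode capture timestamps, and a 0.34 rad steering limit (all assumptions; nearest-neighbor matching stands in for full interpolation):

import glob
import os

import numpy as np
import pandas as pd

MAX_STEER = 0.34  # rad; adjust to your platform's steering limit

cmds = pd.read_csv('drive.csv').sort_values('stamp')
imgs = pd.DataFrame({'path': sorted(glob.glob('data/images/*.png'))})
imgs['stamp'] = imgs['path'].map(
    lambda p: float(os.path.splitext(os.path.basename(p))[0]))

# Match each image to the nearest steering command in time.
pairs = pd.merge_asof(imgs.sort_values('stamp'), cmds,
                      on='stamp', direction='nearest')
pairs['label'] = np.clip(pairs['steering_angle'] / MAX_STEER, -1.0, 1.0)

# Shuffle, then split 80/10/10.
pairs = pairs.sample(frac=1.0, random_state=0).reset_index(drop=True)
n_train, n_val = int(0.8 * len(pairs)), int(0.9 * len(pairs))
for name, df in {'train': pairs.iloc[:n_train],
                 'val': pairs.iloc[n_train:n_val],
                 'test': pairs.iloc[n_val:]}.items():
    df.to_csv(f'data/{name}.csv', index=False)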

2. Define the Model

Create train_mlp.py:

import torch
import torch.nn as nn

class SteeringMLP(nn.Module):
    def __init__(self, input_dim=1200, hidden=(512, 256)):
        super().__init__()
        layers = []
        prev = input_dim
        for width in hidden:
            layers += [nn.Linear(prev, width), nn.ReLU()]
            prev = width
        layers.append(nn.Linear(prev, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

Flatten each grayscale ROI (e.g., a 40×30 crop, giving input_dim = 1200) into a feature vector.
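
A minimal Dataset sketch, assuming the train/val/test CSVs from step 1 with path and label columns (the column names are assumptions):

import cv2
import pandas as pd
import torch
from torch.utils.data import Dataset

class SteeringDataset(Dataset):
    def __init__(self, csv_path):
        self.rows = pd.read_csv(csv_path)

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows.iloc[i]
        img = cv2.imread(row['path'], cv2.IMREAD_GRAYSCALE)
        # Bottom half of the frame, downsampled to the 40x30 ROI.
        roi = cv2.resize(img[img.shape[0] // 2:, :], (40, 30))
        x = torch.from_numpy(roi).float().flatten() / 255.0  # 1200 features
        y = torch.tensor(row['label'], dtype=torch.float32)
        return x, y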

3. Training Loop

from torch.utils.data import DataLoader
import torch.optim as optim

def train(model, train_ds, val_ds, epochs=30, lr=1e-4):
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    loader = DataLoader(train_ds, batch_size=128, shuffle=True)
    best_val = float('inf')
    for epoch in range(epochs):
        model.train()
        for x, y in loader:
            opt.zero_grad()
            # squeeze(-1) keeps the batch dimension even when batch size is 1.
            pred = model(x).squeeze(-1)
            loss = loss_fn(pred, y)
            loss.backward()
            opt.step()
        val_loss = evaluate(model, val_ds, loss_fn)
        print(f"epoch {epoch}: val {val_loss:.4f}")
        if val_loss < best_val:
            best_val = val_loss
            torch.save(model.state_dict(), 'artifacts/mlp_best.pt')

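The loop relies on an evaluate helper that is not defined above; a minimal version:

@torch.no_grad()
def evaluate(model, ds, loss_fn, batch_size=256):
    # Average loss over the dataset, weighted by batch size.
    model.eval()
    total, n = 0.0, 0
    for x, y in DataLoader(ds, batch_size=batch_size):
        pred = model(x).squeeze(-1)
        total += loss_fn(pred, y).item() * len(y)
        n += len(y)
    return total / n
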
Beyond this minimal loop, add early stopping and log metrics to TensorBoard, for example:
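
One way to add both, extending the train function above (the patience of 5 epochs is an arbitrary choice):

from torch.utils.tensorboard import SummaryWriter

def train_with_logging(model, train_ds, val_ds, epochs=30, lr=1e-4, patience=5):
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    loader = DataLoader(train_ds, batch_size=128, shuffle=True)
    writer = SummaryWriter('runs/mlp_steering')
    best_val, bad_epochs = float('inf'), 0
    for epoch in range(epochs):
        model.train()
        running = 0.0
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x).squeeze(-1), y)
            loss.backward()
            opt.step()
            running += loss.item() * len(y)
        val_loss = evaluate(model, val_ds, loss_fn)
        writer.add_scalar('loss/train', running / len(train_ds), epoch)
        writer.add_scalar('loss/val', val_loss, epoch)
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), 'artifacts/mlp_best.pt')
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                print(f"early stop at epoch {epoch}")
                break
    writer.close()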

4. Export for Deployment

  1. Convert to TorchScript:
    model = SteeringMLP()  # same architecture as in training
    model.load_state_dict(torch.load('artifacts/mlp_best.pt'))
    model.eval()
    scripted = torch.jit.script(model)
    scripted.save('artifacts/mlp_steering.ts')
    
  2. Copy mlp_steering.ts to the car (scp artifacts/mlp_steering.ts fri@car-01:~/models/).
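
Before copying, a quick smoke test confirms the exported model loads outside the training script and returns a single scalar:

import torch

m = torch.jit.load('artifacts/mlp_steering.ts')
out = m(torch.zeros(1200))
print(out.shape, float(out))  # torch.Size([1]); trained outputs lie roughly in [-1, 1]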

5. Real-Time Inference Node

Implement mlp_inference_node.py:

import os

import cv2
import rclpy
import torch
from ackermann_msgs.msg import AckermannDriveStamped
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image

MAX_STEER = 0.34  # rad; must match the scaling used in dataset preparation

class MLPDriver(Node):
    def __init__(self):
        super().__init__('mlp_driver')
        self.bridge = CvBridge()
        # torch.jit.load does not expand '~', so expand it explicitly.
        self.model = torch.jit.load(
            os.path.expanduser('~/models/mlp_steering.ts')).to('cuda')
        self.model.eval()
        self.declare_parameter('speed', 1.5)
        self.sub = self.create_subscription(Image, '/camera/image_rect', self.cb, 10)
        self.pub = self.create_publisher(AckermannDriveStamped, '/drive', 10)

    def cb(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg, 'mono8')
        # Crop the lower half of the 640x480 frame and downsample to the
        # 40x30 ROI used in training.
        roi = cv2.resize(img[240:480, :], (40, 30)).flatten() / 255.0
        tensor = torch.from_numpy(roi).float().to('cuda')
        with torch.inference_mode():
            steer = float(self.model(tensor).cpu())
        cmd = AckermannDriveStamped()
        cmd.header.stamp = self.get_clock().now().to_msg()
        cmd.drive.speed = self.get_parameter('speed').value
        # Undo the [-1, 1] label scaling from dataset preparation.
        cmd.drive.steering_angle = steer * MAX_STEER
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(MLPDriver())

Add watchdog timers and manual override checks as needed.
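
A minimal watchdog sketch for the MLPDriver class above (the 0.5 s timeout and 10 Hz check rate are arbitrary choices):

# In MLPDriver.__init__, track the last image time and start a timer:
#     self.last_img = self.get_clock().now()
#     self.timer = self.create_timer(0.1, self.watchdog)
# and set self.last_img = self.get_clock().now() at the top of cb().

def watchdog(self):
    # Publish a stop command if the camera feed stalls.
    elapsed = (self.get_clock().now() - self.last_img).nanoseconds * 1e-9
    if elapsed > 0.5:
        self.pub.publish(AckermannDriveStamped())  # zero speed and steering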

6. Closed-Loop Testing

  • Use a safety driver ready to switch to manual control.
  • Begin at low speed on a straight segment; expand to a full lap after verifying stable behavior.
  • Log /drive, /lane_error, and /camera/image_rect for post-run analysis (see the command below).
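
For example (the output bag name is arbitrary):

    ros2 bag record -o closed-loop-run /drive /lane_error /camera/image_rect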

7. Evaluate Performance

  • Compute steering RMSE vs. human commands on held-out laps (see the sketch below).
  • Review failure cases (sun glare, shadows) and mark them for future data augmentation.
  • Consider pruning or quantizing the model if GPU usage is high.
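
A minimal RMSE computation, assuming CSVs of human and model /drive logs with stamp and steering_angle columns (the file and column names are assumptions):

import numpy as np
import pandas as pd

human = pd.read_csv('human_lap.csv').sort_values('stamp')
model = pd.read_csv('model_lap.csv').sort_values('stamp')
# Nearest-timestamp alignment of model commands to human commands.
pairs = pd.merge_asof(model, human, on='stamp',
                      direction='nearest', suffixes=('_model', '_human'))
err = pairs['steering_angle_model'] - pairs['steering_angle_human']
print(f"steering RMSE: {np.sqrt(np.mean(err ** 2)):.4f} rad")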

Wrap-Up

Store the trained model, training scripts, and experiment metadata in version control or the experiment tracking system. Prepare to explore reinforcement learning in simulation next.

Next Tutorial: Finish the series with Reinforcement Learning for Lane Following.