Tianao (Owen) Zeng

2026

Robust Perception for Autonomous Vehicles: Camera-Only vs. Camera+LiDAR Fusion

A CARLA closed-loop robustness study comparing YOLOv8n camera-only perception with RGB+LiDAR PointPainting-style fusion under weather, viewpoint, and adversarial stress.

  • multimodal perception
  • autonomous vehicles
  • sensor fusion
  • robustness

Overview

Built an episode-based CARLA evaluation pipeline where perception outputs directly drive throttle, brake, and steering, so camera and LiDAR failures affect the ego vehicle's actual behavior rather than only offline detection metrics.
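The control side of this loop is deliberately small. Below is a minimal sketch of one synchronous-mode tick cycle, assuming a get_frame helper that returns the latest RGB image and LiDAR scan and a run_perception stub standing in for either stack; those names and the thresholds are illustrative, not the project's actual code.

    # Minimal closed-loop episode sketch. `get_frame` and `run_perception`
    # are hypothetical stand-ins; the thresholds are illustrative.
    import carla

    EMERGENCY_RANGE_M = 8.0   # brake hard below this obstacle range
    TARGET_SPEED_MPS = 8.0    # cruise speed when the path is clear

    def run_perception(rgb_image, lidar_points):
        """Stand-in for YOLOv8n or the PointPainting-style fusion stack.
        Returns range (m) to the nearest relevant obstacle, or None."""
        raise NotImplementedError

    def run_episode(vehicle, get_frame, n_ticks=1000):
        world = vehicle.get_world()            # assumes synchronous mode is on
        for _ in range(n_ticks):
            world.tick()
            rgb, lidar = get_frame()           # latest paired sensor data
            nearest = run_perception(rgb, lidar)
            control = carla.VehicleControl()   # steer stays 0.0 in this sketch;
                                               # the project adds lane following
            if nearest is not None and nearest < EMERGENCY_RANGE_M:
                control.brake = 1.0            # emergency stop on close obstacle
            else:
                v = vehicle.get_velocity()
                speed = (v.x**2 + v.y**2 + v.z**2) ** 0.5
                err = TARGET_SPEED_MPS - speed
                control.throttle = max(0.0, min(0.6, 0.2 * err))  # crude P control
            vehicle.apply_control(control)     # perception directly drives the car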

Problem

Camera-only AV perception is cheaper and simpler, while camera+LiDAR fusion is often assumed to be safer. The project tests that assumption under matched closed-loop scenarios spanning normal driving, adverse weather, viewpoint variation, camera glare, and LiDAR spoofing.

What I Built

  • A reproducible full-pipeline runner for paired camera-only and fusion episodes across Town03, Town04, and Town05 with matched seeds, traffic, and event windows.
  • A YOLOv8n RGB detector baseline and an RGB+LiDAR PointPainting-style fusion baseline using DeepLabV3-ResNet50 semantic segmentation and LiDAR confirmation (the painting step is sketched after this list).
  • Closed-loop perception-driven control via CARLA VehicleControl, including emergency braking, slow-down logic, target-speed tracking, and lane-following steering.
  • Attack and stress-test utilities for low visibility, viewpoint shifts, camera glare, and LiDAR phantom-obstacle injection (also sketched below), plus automated plots, tables, screenshots, and failure-case summaries.
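The painting step in the fusion baseline is worth showing concretely. A minimal sketch, assuming camera intrinsics K (3x3), a LiDAR-to-camera extrinsic T_cam_lidar (4x4), and per-pixel class scores from a DeepLabV3-ResNet50 forward pass elsewhere; all parameter names here are illustrative.

    # PointPainting-style sketch: append per-pixel semantic scores to LiDAR
    # points. `K` and `T_cam_lidar` are assumed calibration inputs.
    import numpy as np

    def paint_points(points_xyz, seg_scores, K, T_cam_lidar):
        """points_xyz: (N, 3) LiDAR points in the LiDAR frame.
        seg_scores:  (H, W, C) per-pixel class scores from segmentation.
        Returns (M, 3 + C) painted points that project inside the image."""
        H, W, C = seg_scores.shape
        # Homogeneous transform into the camera frame (z forward).
        pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        cam = (T_cam_lidar @ pts_h.T).T[:, :3]
        in_front = cam[:, 2] > 0.1            # keep points ahead of the camera
        cam = cam[in_front]
        # Pinhole projection to pixel coordinates.
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]
        u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
        in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        # Concatenate each surviving point with the class scores of its pixel.
        return np.hstack([points_xyz[in_front][in_img],
                          seg_scores[v[in_img], u[in_img]]])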
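The LiDAR spoofing utility boils down to splicing a synthetic point cluster into a scan before the fusion stack sees it. A minimal sketch, with the distance and cluster dimensions as illustrative defaults rather than the project's actual attack parameters:

    # LiDAR phantom-obstacle sketch: inject a box-shaped cluster roughly the
    # footprint of a vehicle rear face `distance_m` ahead (+x) of the sensor.
    import numpy as np

    def inject_phantom(points_xyz, distance_m=10.0, width_m=1.8,
                       height_m=1.5, n_points=200, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        phantom = np.column_stack([
            np.full(n_points, distance_m) + rng.normal(0, 0.02, n_points),  # depth jitter
            rng.uniform(-width_m / 2, width_m / 2, n_points),               # lateral spread
            rng.uniform(0.0, height_m, n_points),                           # vertical spread
        ])
        return np.vstack([points_xyz, phantom])  # splice into the real scan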

Technical Stack

  • Python
  • CARLA
  • PyTorch
  • Ultralytics YOLOv8n
  • DeepLabV3-ResNet50
  • LiDAR point clouds
  • OpenCV
  • NumPy
  • Pandas
  • Matplotlib

Results / Outcomes

  • Fusion reduced the missed-stop rate in every scenario category, including normal conditions, adverse weather, viewpoint variation, and adversarial settings.
  • Under paired attack evaluation, the camera-only stack showed an aggregate controller decision-change rate of 0.213, while fusion showed no aggregate controller decision changes in the reported paired runs (the metric is sketched after this list).
  • Fusion improved adverse-weather recall and reduced safety-critical misses, but increased false-stop behavior, revealing a safety-conservatism tradeoff rather than a uniform win.
  • The report ties each claim directly to implementation files, covering orchestration, YOLO detection, semantic fusion, attack injection, closed-loop control, and the final experiment configuration.
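The decision-change rate reported above can be read as the fraction of matched control ticks whose controller decision differs between a clean run and its attacked pair. A minimal sketch of that aggregation, assuming per-tick CSV logs aligned on a tick index with a categorical decision column (both column names hypothetical):

    # Sketch of an aggregate decision-change rate over one paired
    # clean/attacked episode. Assumes per-tick logs with aligned 'tick'
    # values and a categorical 'decision' column ('cruise', 'slow', 'brake').
    import pandas as pd

    def decision_change_rate(clean_csv, attacked_csv):
        clean = pd.read_csv(clean_csv)
        attacked = pd.read_csv(attacked_csv)
        merged = clean.merge(attacked, on="tick", suffixes=("_clean", "_atk"))
        changed = merged["decision_clean"] != merged["decision_atk"]
        return changed.mean()   # fraction of matched ticks that flipped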