Robust Perception at the Edge: Physical Patch Attacks and Scene Context

Recent studies show that physical adversarial patches (high-contrast stickers or patterns placed on the road or nearby surfaces) can reliably perturb lane segmentation and steering, even when the vehicle is otherwise operating normally. On resource-limited platforms, heavyweight defences are impractical, and naive image filters can degrade nominal performance. At the same time, purely lane-centric pipelines struggle when lanes are partially occluded by vehicles, cones, or pedestrians; lacking object-level context, they may over-trust spurious edges.

This thesis delivers a pragmatic, embedded-friendly robustness strategy with two components. First, it defines a safe, reproducible physical attack protocol for RaspiCar and benchmarks the existing lane stack under benign and attacked conditions; it then implements and tunes low-cost defences, measuring both robustness gains and runtime overhead on-device. Second, to address occlusions and contextual ambiguity, the thesis integrates a TPU-friendly YOLO detector and fuses object cues with lane predictions to down-weight occluded or implausible lane segments and add an "environment confidence" signal for the controller.
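One way the fusion step could look in practice is sketched below. This is a hypothetical illustration, not the RaspiCar pipeline's actual API: the message shapes, the `0.5` detection threshold, and the `occl_weight` factor are all assumptions to be tuned during the thesis.

```python
# Hypothetical lane/object fusion sketch: down-weight lane points that fall
# inside detected object boxes and derive an environment-confidence signal.
# All names, thresholds, and data layouts are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Box:
    """Axis-aligned detection box in image coordinates, with detector score."""
    x1: float
    y1: float
    x2: float
    y2: float
    score: float


def inside(pt: Tuple[float, float], b: Box) -> bool:
    """True if the image point lies within the box."""
    return b.x1 <= pt[0] <= b.x2 and b.y1 <= pt[1] <= b.y2


def fuse(lane_pts: List[Tuple[float, float]],
         lane_conf: List[float],
         boxes: List[Box],
         occl_weight: float = 0.2) -> Tuple[List[float], float]:
    """Scale down the confidence of occluded lane points and return the
    per-point confidences plus an overall environment confidence in [0, 1]."""
    fused = []
    for pt, c in zip(lane_pts, lane_conf):
        occluded = any(inside(pt, b) for b in boxes if b.score > 0.5)
        fused.append(c * occl_weight if occluded else c)
    env_conf = sum(fused) / max(len(fused), 1)
    return fused, env_conf
```

A controller could then hold or soften steering corrections whenever `env_conf` drops below a calibrated threshold, rather than trusting lane edges inside an occluding object.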

The result is an end-to-end characterisation of physical robustness on RaspiCar, paired with deployable defences and scene context that strengthen reliability without violating real-time constraints.

Thesis Type:

  • Master Thesis / Master Project

Goal and Tasks:

  • Establish a physical patch attack protocol on RaspiCar and deploy lightweight on-device defences that keep robustness high without breaking real-time performance
  • Integrate YOLO-based scene context and fuse it with lane perception to reduce interventions during occlusions while preserving nominal driving quality

Demonstrator:

  • RaspiCar (Raspberry Pi + Coral/accelerator), camera, existing lane stack, modular ZeroMQ pipeline
  • Baseline of available attacks and defences
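Since the detector publishes into the existing modular ZeroMQ pipeline, a minimal detection message could be framed as follows. The topic name, field layout, and JSON encoding are illustrative assumptions; with pyzmq, the encoded payload would be sent via a PUB socket, e.g. `sock.send_multipart([b"detections", payload])`.

```python
# Sketch of a detection message for the modular ZeroMQ pipeline.
# Field names and encoding are assumptions, not the pipeline's actual schema.
import json


def encode_detections(frame_id: int, detections: list) -> bytes:
    """Serialise detections for publishing; each detection is a dict with
    a label, a score, and a box given as [x1, y1, x2, y2]."""
    return json.dumps({"frame": frame_id, "detections": detections}).encode("utf-8")


def decode_detections(payload: bytes) -> dict:
    """Inverse of encode_detections, for the subscribing fusion node."""
    return json.loads(payload.decode("utf-8"))
```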

Method:

  • Design a safe, reproducible physical-attack evaluation (print files, placements, procedures)
  • Implement and tune low-cost defences and measure runtime overhead on-device
  • Integrate an edge-friendly YOLO detector; publish detections and fuse with lane outputs to down-weight occluded/implausible segments and emit an environment-confidence signal
  • Conduct experiments (baseline vs defence; with/without scene fusion) under benign, occluded, and adversarial conditions; report attack success, lateral error, handovers/km, FPS/latency
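The reported metrics above could be computed along these lines. The 0.5 m lateral-error threshold and the exact metric definitions are assumptions for illustration, not the thesis's final evaluation protocol.

```python
# Hypothetical evaluation metrics for the experiment matrix; thresholds
# and definitions are illustrative assumptions.
from typing import List, Tuple


def attack_success_rate(lateral_errors: List[float], threshold: float = 0.5) -> float:
    """Fraction of attacked frames whose lateral error exceeds the threshold (m)."""
    return sum(e > threshold for e in lateral_errors) / max(len(lateral_errors), 1)


def handovers_per_km(n_handovers: int, distance_m: float) -> float:
    """Driver/safety interventions normalised per kilometre driven."""
    return n_handovers / (distance_m / 1000.0)


def latency_stats(latencies_ms: List[float]) -> Tuple[float, float]:
    """Mean per-frame latency (ms) and the implied frames per second."""
    mean = sum(latencies_ms) / len(latencies_ms)
    return mean, 1000.0 / mean
```

Reporting the same four numbers for each condition (baseline vs. defence, with/without scene fusion) keeps the benign, occluded, and adversarial runs directly comparable.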

Recommended Prior Knowledge:

  • Experience with Raspberry Pi or similar embedded platforms
  • Programming skills (Python)
  • Basic familiarity with computer vision and state estimation
  • Basic data analysis

Start:

  • Flexible – ideally within the next 1–2 months / 6 months

Contact: