When Confidence Drops: Predictive Safeguards for Lane-Keeping

Even when perception runs at high nominal accuracy, distribution shift (lighting changes, road wear, weather, occlusions) can silently erode performance. Raw model confidences alone are often poorly calibrated under shift, and they do not capture the geometric inconsistencies that precede control failures. The RaspiCar vehicle demonstrator platform already computes several trust indicators, such as lane coverage, width consistency, lane centre offset, connected-component stability, and model agreement, that correlate with geometry reliability and downstream control quality.
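
To make the indicator definitions concrete, the minimal sketch below computes two of them (lane coverage and width consistency) from a binary lane mask. The function name, mask format, and exact formulas are illustrative assumptions, not the RaspiCar implementation.

```python
import numpy as np

def trust_indicators(lane_mask: np.ndarray) -> dict:
    """Compute two illustrative indicators from a binary lane mask
    (H x W, True/1 where a lane marking was detected)."""
    mask = lane_mask.astype(bool)
    rows_with_lane = mask.any(axis=1)
    # Lane coverage: fraction of image rows with any lane evidence.
    coverage = float(rows_with_lane.mean())

    # Per-row lane width in pixels, for rows that contain lane pixels.
    widths = []
    for row in mask[rows_with_lane]:
        cols = np.flatnonzero(row)
        widths.append(cols[-1] - cols[0])
    widths = np.asarray(widths, dtype=float)

    # Width consistency: 1 minus the relative spread of widths; values near 1
    # indicate a geometrically plausible, stable detection.
    if widths.size:
        consistency = max(0.0, 1.0 - widths.std() / (widths.mean() + 1e-6))
    else:
        consistency = 0.0

    return {"lane_coverage": coverage, "width_consistency": consistency}
```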

This thesis turns those indicators into operational predictions of near-term failure and couples them to concrete control policies. The work calibrates model outputs and quantifies how the trust signals relate to lateral error, lane loss, and imminent interventions across varied conditions (day/night, surfaces, occlusions). A lightweight meta-classifier then fuses those signals with simple driving-behaviour features to estimate near-term failure probability, from which the work derives actionable thresholds that trigger slow-down, keep-lane fallback, safe stop, or controller switching, balancing false alarms against missed detections.
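
The last step can be pictured as a simple mapping from a calibrated failure probability to a graded action. The threshold values and action names below are placeholders; in the thesis they would be tuned offline against the false-alarm / missed-detection trade-off.

```python
from enum import Enum

class Action(Enum):
    NOMINAL = "nominal"
    SLOW_DOWN = "slow_down"
    KEEP_LANE_FALLBACK = "keep_lane_fallback"
    SAFE_STOP = "safe_stop"

# Placeholder thresholds, highest risk first.
RISK_LADDER = [(0.8, Action.SAFE_STOP),
               (0.5, Action.KEEP_LANE_FALLBACK),
               (0.2, Action.SLOW_DOWN)]

def select_action(failure_probability: float) -> Action:
    """Map a calibrated near-term failure probability to a graded action.
    Controller switching could be handled as an additional branch keyed to
    specific indicators (e.g. low model agreement)."""
    for threshold, action in RISK_LADDER:
        if failure_probability >= threshold:
            return action
    return Action.NOMINAL
```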

Thesis Type:

  • Master Thesis / Master Project

Goal and Tasks:

  • Build a calibrated, interpretable risk estimator from trust indicators and simple driving-behaviour features that reliably predicts near-term lane-keeping failures across varied conditions
  • Deploy a policy module that converts predicted risk into online actions (slow-down, keep-lane fallback, safe stop, or controller switching) and demonstrate fewer interventions without degrading tracking accuracy

Demonstrator:

  • RaspiCar vehicle platform with modular ZeroMQ pipeline
  • Lane-keeping stack producing trust indicators

Method:

  • Calibrate model confidences; produce reliability diagrams and correlation analyses linking trust signals to lateral error, lane loss, and interventions (see the reliability-diagram sketch after this list)
  • Train a lightweight meta-classifier to estimate failure probability within the next N meters/seconds from trust and behaviour features (steering variance, recent interventions); a training sketch follows below
  • Implement a real-time policy node that triggers graded actions by risk level; tune thresholds offline, then validate online (a ZeroMQ node skeleton is sketched below)
  • Evaluate with controlled off/on studies (policy disabled vs. enabled) across day/night, surfaces, and occlusions; report handovers/km, lateral error, and false-positive intervention rate (see the metrics sketch below)
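
For the calibration step, a minimal sketch of the data behind a reliability diagram, assuming binary correctness labels (e.g. whether the estimated lane geometry passed a ground-truth check) paired with raw model confidences:

```python
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=10):
    """Bin raw confidences against empirical accuracy (reliability-diagram
    data) and accumulate the expected calibration error (ECE)."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows, ece = [], 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # Last bin is closed on the right so that confidence 1.0 is counted.
        in_bin = (y_prob >= lo) & ((y_prob < hi) if i < n_bins - 1 else (y_prob <= hi))
        if not in_bin.any():
            continue
        confidence = y_prob[in_bin].mean()
        accuracy = y_true[in_bin].mean()
        ece += in_bin.mean() * abs(accuracy - confidence)
        rows.append({"bin": (lo, hi), "confidence": confidence,
                     "accuracy": accuracy, "count": int(in_bin.sum())})
    return rows, float(ece)
```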
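
A training sketch for the meta-classifier, assuming a logged feature matrix whose columns are the trust indicators plus behaviour features and a binary label marking failure within a short horizon; logistic regression is used here purely as an interpretable baseline, and the feature names and labelling scheme are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Hypothetical feature layout: one row per timestep.
FEATURES = ["lane_coverage", "width_consistency", "centre_offset",
            "component_stability", "model_agreement",
            "steering_variance", "recent_interventions"]

def train_risk_estimator(X: np.ndarray, y: np.ndarray):
    """Fit an interpretable failure-probability estimator and return the
    precision-recall trade-off used to pick operating thresholds."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(X_tr, y_tr)
    probs = clf.predict_proba(X_te)[:, 1]
    precision, recall, thresholds = precision_recall_curve(y_te, probs)
    return clf, precision, recall, thresholds
```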
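
A skeleton of the real-time policy node wired into a ZeroMQ pipeline; the endpoints, message fields, and threshold values are assumptions for illustration, not the RaspiCar message schema:

```python
import zmq

RISK_ENDPOINT = "tcp://localhost:5556"    # assumed publisher of {"failure_probability": float}
ACTION_ENDPOINT = "tcp://*:5557"          # this node publishes {"action": str, "risk": float}

def run_policy_node(slow_down=0.2, fallback=0.5, safe_stop=0.8):
    """Subscribe to risk estimates and publish graded action commands."""
    ctx = zmq.Context()
    risk_in = ctx.socket(zmq.SUB)
    risk_in.connect(RISK_ENDPOINT)
    risk_in.setsockopt_string(zmq.SUBSCRIBE, "")
    action_out = ctx.socket(zmq.PUB)
    action_out.bind(ACTION_ENDPOINT)

    while True:
        msg = risk_in.recv_json()          # blocks until the next risk estimate
        p = float(msg["failure_probability"])
        if p >= safe_stop:
            action = "safe_stop"
        elif p >= fallback:
            action = "keep_lane_fallback"
        elif p >= slow_down:
            action = "slow_down"
        else:
            action = "nominal"
        action_out.send_json({"action": action, "risk": p})
```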
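
Finally, a sketch of how the reported metrics could be computed from a logged test run; the log field names are placeholders:

```python
import numpy as np

def evaluate_run(log: dict) -> dict:
    """Summarise one logged run. `log` is assumed to hold aligned
    per-timestep arrays under the (hypothetical) field names below."""
    distance_km = float(np.sum(log["distance_m"])) / 1000.0
    handovers = int(np.sum(log["handover"]))                   # manual takeovers
    lateral_error = float(np.mean(np.abs(log["lateral_error_m"])))

    # False positive: the policy intervened although no failure followed
    # within the labelling horizon.
    intervened = np.asarray(log["policy_intervened"], dtype=bool)
    failed = np.asarray(log["failure_within_horizon"], dtype=bool)
    fp_rate = float((intervened & ~failed).sum() / max(intervened.sum(), 1))

    return {"handovers_per_km": handovers / max(distance_km, 1e-6),
            "mean_abs_lateral_error_m": lateral_error,
            "false_positive_intervention_rate": fp_rate}
```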

Recommended Prior Knowledge:

  • Experience with Raspberry Pi or similar embedded platforms
  • Programming skills (Python)
  • Basic familiarity with computer vision and state estimation
  • Basic data analysis

Start:

  • Flexible – ideally within the next 1–2 months / 6 months

Contact: