Camera-only lane perception is vulnerable to glare, shadows, motion blur, occlusions, and visual artifacts. These problems show up as shaky lane geometry, false lane edges, and sudden curvature spikes that drive unnecessary steering corrections or trigger safety handovers. The RaspiCar vehicle demonstrator platform already carries additional lightweight sensors: an IMU providing yaw-rate and a forward-facing time-of-flight (ToF) range sensor. Together they observe complementary aspects of the scene and the vehicle’s motion.
This thesis investigates how to combine these signals into a computationally light, real-time fusion layer that stabilises lane estimates and prevents collisions without exceeding the tight latency/FPS budget. The core idea is to:
- use the IMU to regularise the temporal evolution of the lane heading, reducing frame-to-frame noise
- use the ToF sensor as a feasibility gate that rejects lane-following decisions which would steer into close obstacles (see the sketch after this list)
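A minimal sketch of both ideas, assuming the lane detector emits a per-frame heading estimate and the ToF sensor a forward range in metres (the function names, the complementary-filter form, and the threshold values are illustrative assumptions, not the prescribed design):

```python
ALPHA = 0.8          # weight on the IMU-propagated heading (tuning value)
MIN_CLEARANCE = 0.4  # assumed ToF veto distance in metres

def fuse_heading(psi_prev, yaw_rate, dt, psi_camera):
    """Complementary filter: propagate the last heading with the IMU
    yaw-rate, then blend with the noisier per-frame camera estimate."""
    psi_pred = psi_prev + yaw_rate * dt
    return ALPHA * psi_pred + (1.0 - ALPHA) * psi_camera

def gate_steering(steer_cmd, tof_range_m):
    """Feasibility gate: veto lane-following commands whenever the forward
    obstacle is closer than the clearance threshold (a deliberate
    simplification; a real gate would consider the steering direction)."""
    if tof_range_m is not None and tof_range_m < MIN_CLEARANCE:
        return 0.0, True   # neutral steering, raise a handover flag
    return steer_cmd, False
```

The Method section below replaces the fixed blend weight with a proper state estimator.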
Because the platform is resource-constrained (Raspberry Pi + accelerator) and runs a modular ZeroMQ pipeline, designs must degrade gracefully when any sensor drops out. The resulting fusion layer becomes the data spine for later work on trust-aware decision-making and robustness.
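On the messaging side, graceful degradation can be handled at each subscriber by treating a sensor as dropped once its messages go stale. A sketch with pyzmq, where the endpoint, the plain-string message format, and the 200 ms staleness threshold are all assumptions:

```python
import time
import zmq

ctx = zmq.Context.instance()
imu_sock = ctx.socket(zmq.SUB)
imu_sock.connect("tcp://localhost:5556")   # placeholder endpoint
imu_sock.setsockopt_string(zmq.SUBSCRIBE, "")

poller = zmq.Poller()
poller.register(imu_sock, zmq.POLLIN)

last_yaw_rate = None
last_imu_t = 0.0
IMU_STALE_S = 0.2   # assumed: treat the IMU as dropped after 200 ms

def read_yaw_rate():
    """Non-blocking read that drains pending IMU messages and returns the
    latest yaw-rate, or None once the sensor has dropped out so the fusion
    layer can fall back to camera-only operation."""
    global last_yaw_rate, last_imu_t
    while dict(poller.poll(timeout=0)).get(imu_sock):
        last_yaw_rate = float(imu_sock.recv_string())  # assumed format
        last_imu_t = time.monotonic()
    if time.monotonic() - last_imu_t > IMU_STALE_S:
        return None
    return last_yaw_rate
```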
Thesis Type:
- Master Thesis / Master Project
Goal and Tasks:
- Build a synchronised multi-sensor pipeline and dataset (camera, IMU, ToF, vehicle commands, trust indicators) and implement a computationally light fusion layer that runs in real time (a possible record layout is sketched after this list)
- Demonstrate improved lane-keeping quality and fewer safety handovers across different conditions without exceeding the latency/FPS budget
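One possible per-frame record for the synchronised dataset; the field names and units are illustrative, and Optional fields encode sensor dropouts so camera-only operation still produces valid rows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FusionRecord:
    t_mono: float                 # monotonic capture timestamp [s]
    frame_id: int                 # camera frame index
    psi_camera: Optional[float]   # lane heading from vision [rad]
    c_camera: Optional[float]     # lane curvature from vision [1/m]
    yaw_rate: Optional[float]     # IMU yaw-rate [rad/s], None if dropped
    tof_range: Optional[float]    # forward ToF range [m], None if dropped
    steer_cmd: float              # commanded steering value
    trust: Optional[float]        # trust indicator, if published
```

Aligning sensors by nearest monotonic timestamp per camera frame is one simple synchronisation strategy.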
Demonstrator:
- RaspiCar vehicle platform (Raspberry Pi + accelerator) with camera, IMU, and ToF
- Modular ZeroMQ-based perception→decision→control stack
Method:
- Implement two complementary fusion baselines:
  - Temporal lane stabiliser (for example, an EKF combining lane heading/curvature with IMU yaw-rate; see the filter sketch after this list)
  - Feasibility/obstacle gating using ToF to veto unsafe lane proposals
- Maintain real-time performance through profiling and zero-copy messaging (see the pyzmq sketch after this list)
- Run controlled track experiments (camera-only vs. fusion) and report end-to-end lateral deviation, handovers per km, lane-loss events, and FPS/latency
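A compact version of the temporal lane stabiliser, assuming the camera directly measures lane-relative heading and curvature so the filter stays linear; the state layout, motion model, and noise values are assumptions to be tuned on the platform:

```python
import numpy as np

class LaneStabiliser:
    def __init__(self, dt, speed):
        self.dt = dt            # frame period [s]
        self.v = speed          # assumed constant forward speed [m/s]
        self.x = np.zeros(2)    # state: [lane-relative heading, curvature]
        self.P = np.eye(2)      # state covariance
        self.Q = np.diag([1e-4, 1e-5])  # process noise (tuning values)
        self.R = np.diag([1e-2, 1e-3])  # camera measurement noise

    def predict(self, yaw_rate):
        # Lane-relative heading grows with the measured yaw rate and
        # shrinks with the rotation of the lane tangent (v * c);
        # curvature is modelled as a random walk.
        F = np.array([[1.0, -self.v * self.dt],
                      [0.0,  1.0]])
        self.x = F @ self.x + np.array([yaw_rate * self.dt, 0.0])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, psi_meas, c_meas):
        # The camera observes both state components directly (H = I),
        # so the EKF reduces to a linear Kalman update here.
        z = np.array([psi_meas, c_meas])
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x
```

When the IMU drops out, predict can be skipped (or run with zero yaw-rate and inflated Q), which is the graceful-degradation path described above.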
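For the latency/FPS budget, pyzmq can move image buffers without serialisation or extra copies; in this sketch the frame shape and dtype are assumptions:

```python
import numpy as np
import zmq

def send_frame(sock: zmq.Socket, frame: np.ndarray) -> None:
    # copy=False hands the (contiguous) ndarray buffer straight to ZeroMQ.
    sock.send(np.ascontiguousarray(frame), copy=False)

def recv_frame(sock: zmq.Socket, shape=(480, 640, 3), dtype=np.uint8) -> np.ndarray:
    # copy=False yields a zmq.Frame; wrap its buffer instead of copying it.
    msg = sock.recv(copy=False)
    return np.frombuffer(msg.buffer, dtype=dtype).reshape(shape)
```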
Recommended Prior Knowledge:
- Experience with Raspberry Pi or similar embedded platforms
- Programming skills (Python)
- Basic familiarity with computer vision and with state estimation (e.g., Kalman filtering)
- Basic data analysis
Start:
- Flexible – ideally within the next 1–2 months / 6 months
Contact: