This is my submission for the Udacity Self-Driving Car Nanodegree Lane Detection Project. You can find my code in this Jupyter Notebook.
The pipeline takes a video of a street as input and outputs it with the lane markers highlighted.
The goals / steps of this project are the following:
My pipeline consisted of 5 steps:
In order to draw the left and right lane lines, I modified the draw_lines() function by
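A common way to implement this (the notebook's exact approach may differ) is to split the Hough segments by slope sign into left and right groups and collapse each group into one line by averaging slope and intercept, then extrapolating between the bottom of the image and the horizon. A sketch of the averaging step, with `average_lane_line` as a hypothetical helper name:

```python
import numpy as np

def average_lane_line(segments, y_bottom, y_top):
    """Average Hough segments (x1, y1, x2, y2) into a single line,
    extrapolated between y_bottom and y_top.
    Returns ((x_bottom, y_bottom), (x_top, y_top))."""
    slopes, intercepts = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments to avoid division by zero
        m = (y2 - y1) / (x2 - x1)
        slopes.append(m)
        intercepts.append(y1 - m * x1)  # b in y = m * x + b
    m = np.mean(slopes)
    b = np.mean(intercepts)
    # Invert y = m * x + b to find x at the two extrapolation heights
    return (int((y_bottom - b) / m), y_bottom), (int((y_top - b) / m), y_top)
```

Calling this once for the negative-slope (left) group and once for the positive-slope (right) group yields two solid lines instead of many short segments.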
One potential shortcoming is that the pipeline draws straight lines, which cannot follow curved lane markings.
Another shortcoming is that the algorithm falls back on the coordinates of previously detected lane lines whenever it cannot detect them in the current image. For a real self-driving car this would be very dangerous, because the car would still think it "sees" lane lines even after it has already driven off the road.
A possible improvement would be to use the position of the lane lines in previous images to validate and influence the calculation of the current result. Since the speed of the car is limited, we can assume that the street does not change dramatically between two consecutive frames. This would also reduce the trembling of the drawn lane lines in the video.
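One simple way to realize this idea (a sketch, not the notebook's implementation) is an exponential moving average over the lane-line endpoints, falling back on the last smoothed estimate when a frame yields no detection. `LaneSmoother` is a hypothetical helper name:

```python
import numpy as np

class LaneSmoother:
    """Exponentially smooth lane-line coordinates across video frames.
    Reuses the last estimate when the current frame has no detection."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # weight of the newest measurement
        self.state = None   # last smoothed coordinates

    def update(self, measurement):
        if measurement is None:
            # Detection failed: fall back on the previous estimate.
            # In a real system this fallback should be time-limited.
            return self.state
        m = np.asarray(measurement, dtype=float)
        if self.state is None:
            self.state = m
        else:
            # Blend the new measurement with the running estimate
            self.state = self.alpha * m + (1 - self.alpha) * self.state
        return self.state
```

A lower `alpha` gives smoother, steadier lines at the cost of slower reaction to genuine lane changes.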
Another improvement would be to integrate an algorithm that tunes the parameters of the pipeline according to the quality and characteristics of the input video.
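As one example of such self-tuning, a widely used heuristic derives the Canny hysteresis thresholds from the median brightness of each frame instead of hard-coding them. This is a generic technique, not something implemented in the notebook:

```python
import numpy as np

def auto_canny_thresholds(gray, sigma=0.33):
    """Derive Canny lower/upper thresholds from the median pixel
    intensity of a grayscale frame (common heuristic; sigma controls
    how wide the band around the median is)."""
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper
```

The resulting pair can be passed directly to `cv2.Canny`, so dark and bright videos no longer need hand-tuned edge thresholds.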