Cutting corners: Tesla Autopilot is possibly the most advanced self-driving system available to the public, but Tesla’s ambition has come with a considerable amount of risk. After a Model X steered into a lane divider and caused a fatal accident last year, Tesla ‘fixed’ the problem in a subsequent software update, but the issue is back.

Reddit user Beastpilot drives down a Seattle freeway as part of his afternoon commute. Last year, only days after a fatal accident in similar circumstances, he noticed his own Model X steering toward the lane divider separating the freeway from a carpool lane that veers off to the left. Speaking to Ars Technica, he described the car as treating the divider as if it were an empty lane.

In light of the then-recent tragedy, he notified Tesla immediately. The company didn’t respond, but after several weeks a new update rolled out and the issue stopped. Come October last year, however, his Model X began steering toward the lane divider again. Once again he notified Tesla to no avail; once again, an update rolled out a few weeks later and the issue disappeared.

Continuing to enjoy the Tesla experience despite the Autopilot issues, Beastpilot picked up a Model 3. It had no trouble until the 2019.5.15 update rolled out earlier this month. The issue’s return is plain to see in his Reddit post below: the car follows the lane rightwards until it suddenly veers left, toward the lane divider.

“It's BACK! After 6 months of working fine, 2019.5.15 drives at barriers again” – from r/teslamotors

This is exactly what happened to Walter Huang last year, only when his car shifted left into the lane divider, his hands weren’t on the wheel and the car hit a concrete barrier head-on at 70 mph. By that time, Teslas had passed that exact stretch of road 85,000 times on Autopilot, which had probably lulled Huang into a false sense of security. But just because the system works the first 100,000 times doesn’t mean a software update can’t introduce a fatal flaw.

One way of making sure old errors aren’t built back into the system – remember, the driving behavior is largely learned by a neural network rather than hand-coded – is to re-test every previously reported error against every update using simulations. A 3D model of an intersection or road is presented to the software, which must decide what to do and where to go. If it collides with a virtual car or leaves its lane, it’s back to the drawing board. If it passes through all of these simulations without a hitch, it gets downloaded to the cars.
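In software terms, that process is a regression test suite: every scenario that ever produced a reported failure is replayed against each new build before release. The sketch below illustrates the idea only; the scenario names, data structures, and pass/fail criterion are assumptions for illustration, not Tesla’s actual pipeline.

```python
# Illustrative sketch of simulation-based regression testing for a driving
# system. All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str           # e.g. a stretch of road with a known past failure
    lane_center: float  # lateral position (meters) the car should hold
    tolerance: float    # max allowed deviation before "leaving the lane"

def run_regression(build: Callable[[Scenario], float],
                   scenarios: List[Scenario]) -> List[str]:
    """Replay every known problem scenario; return names of failures."""
    failures = []
    for s in scenarios:
        steered_to = build(s)  # where the candidate software steers the car
        if abs(steered_to - s.lane_center) > s.tolerance:
            failures.append(s.name)
    return failures

# A toy "candidate build" that steers correctly everywhere except the
# divider scenario, mimicking a regression reintroduced by an update.
def candidate_build(s: Scenario) -> float:
    if "divider" in s.name:
        return s.lane_center - 1.5  # drifts left, toward the barrier
    return s.lane_center

scenarios = [
    Scenario("carpool-lane divider", lane_center=0.0, tolerance=0.5),
    Scenario("straight freeway lane-keeping", lane_center=0.0, tolerance=0.5),
]

print(run_regression(candidate_build, scenarios))
# → ['carpool-lane divider']
```

Any failing scenario sends the build back for retraining before it ships; a clean run is what would let an update roll out to the fleet.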

For most companies, this is straightforward because most self-driving cars rely on Lidar-based 3D maps as their real-time information source. Teslas aren’t equipped with Lidar and use only cameras and radar, so they don’t navigate by 3D maps. That means 3D maps can’t be presented to the cars directly; they must first be translated into synthetic camera footage and radar readings. Tesla refuses to say how regularly it puts in the effort to do this, if at all.

Tesla’s plan is to use its billions of miles’ worth of footage, radar readings, and drivers’ decisions to train a neural network so well informed it must be reliable. Or at least that’s the marketing pitch. In reality, Teslas upload only a few hundred megabytes of data per day at peak, so only a fraction of those miles ever reaches the supercomputers.

Tesla’s solution to this technical problem is to shift the burden onto the consumer, who must always keep their hands on the wheel and their mind on the traffic, according to the terms of service. Unfortunately, that runs directly against human nature, and drivers do zone out. With Autopilot on, you can’t trust the car, and you can’t trust the driver. Perhaps it’s better to leave Autopilot off for the time being.