Tesla may be close to a European breakthrough in supervised autonomous driving. That is genuinely impressive. It also makes the company’s laughably stubborn routing errors look even more absurd.
For Tesla drivers in Europe, April 10 has started to feel like one of those dates that suddenly carries more weight than it should. If Tesla gets approval to bring Full Self-Driving (Supervised) to European roads, it will mark a genuinely important moment. This is not just about a car keeping itself in lane on a motorway. The real promise is far more ambitious: a vehicle that can handle city traffic, awkward intersections, cyclists, pedestrians, buses, trams, roadworks, and the countless small ambiguities that make real driving so messy. That, in fairness, is remarkable.
I have been driving a Tesla for nearly three years, and I have followed the development of FSD closely. What Tesla appears to have achieved with its vision-only approach is genuinely impressive. While much of the industry preferred the safety blanket of piling on sensors, Tesla committed itself to teaching a machine to see through cameras: to interpret space, identify objects, understand context, and anticipate what is likely to happen next. That is not some minor technical trick. It is a serious achievement.
If Tesla has really brought that system to a point where it can deal with dense urban traffic with a level of competence that rivals or exceeds many human drivers, then that deserves recognition. Human beings are not exactly flawless behind the wheel. They are distracted, impatient, careless, tired, and often busy doing anything except concentrating properly on the road. They stare at phones, fiddle with menus, skip songs, adjust climate settings, and drift through traffic on routine. A machine that pays constant attention and reacts quickly is not an insult to humanity. In many situations, it may simply be better.
That is why Tesla’s progress matters. I am convinced they are further ahead than anyone else in this field, and I also think the underlying technology is fundamentally viable for use on real roads.
Then comes the ridiculous part
What makes the whole thing so fascinating is that the same company still manages to fail at things that are embarrassingly basic.
I see it in navigation. On one particular route, Tesla plans the outward journey correctly. On the return trip, however, it suddenly behaves as if a completely normal road segment cannot be used from the other direction. That is nonsense. The road is not blocked. It is not one-way. It is not a dead end. There is no obstacle. It is simply a normal road that can be driven in both directions.
Tesla navigation still wants to send me on a pointless detour.
That alone would just be a mapping error. Irritating, yes, but hardly dramatic. Digital maps are wrong all the time (even though Google Maps has always got this particular stretch right). What turns it into comedy is the contradiction with Tesla’s grand language about intelligence and learning. Because I ignore the detour. Repeatedly. I drive the valid route anyway, because I know, and can see perfectly well, that the road is open. The system recalculates, proposes the same nonsense again, and still seems unable to absorb the obvious lesson.
What exactly is Tesla learning?
This is where the whole AI narrative starts to wobble. Tesla talks endlessly about neural networks, prediction, inference and fleet intelligence. Fine. The car is supposedly advanced enough to interpret complex traffic situations, anticipate the behaviour of pedestrians and cyclists, and react proactively to dynamic environments. That is precisely what makes the remaining stupidity so hard to ignore.
The car has cameras. It sees the road. It sees the traffic sign. It sees that there is no barrier. It sees that I drove through that section without incident. It sees, in other words, that its routing assumption was false. And yet nothing seems to change.
So the obvious question is this:
What exactly is Tesla learning, and where does that learning stop?
You cannot ask people to marvel at machine intelligence while shrugging off machine stupidity in the same product experience. If the car can supposedly cope with the chaos of urban traffic, how can it still fail to grasp that a perfectly open road is, in fact, open? Or at least learn it after the fact? At some point the contrast becomes funny. One part of Tesla appears to be building the future of driving. Another seems unable to correct a mistake that reality has disproved again and again, which is exactly what we humans mean by learning.
Respect, with raised eyebrows
That is why the right reaction is neither blind celebration nor lazy dismissal. There is real progress here. Serious progress. If Tesla brings FSD (Supervised) to Europe, that will deserve genuine recognition. A camera-based system handling real traffic at this level would be a major technological achievement, and one that many critics dismissed as unrealistic for years.
But progress does not cancel absurdity. That is the Tesla paradox. The company may be leading in one of the most difficult and important areas of applied AI, while still exposing owners to moments of almost laughable incompetence in everyday navigation. It can make you think you are looking at the future, then immediately remind you that somewhere inside the same ecosystem, something is still painfully, stubbornly dumb.

And perhaps that is the most Tesla thing of all. The company is capable of producing genuine technological awe, then undercutting it with a piece of avoidable nonsense that no sensible driver can quite explain. It gives you a glimpse of what the future might look like, then sends you on a pointless detour because somewhere in its routing logic, reality still has not been processed.
The achievement is real. So is the absurdity.
You know what comes next:
As usual, just my five cents.
//Alex