Now that Tesla’s Full Self-Driving Supervised has been approved for use in the Netherlands, the same reflex has popped up again among colleagues, acquaintances, and the usual armchair experts.
“Yes, but it is only a driver assistance system.”
“Yes, but Mercedes has Level 3.”
“Yes, but Tesla only uses cameras.”
Fine. Let us start with the part that is technically correct and still completely misses the point. Yes, Full Self-Driving Supervised is, in legal terms, a supervised driver assistance system. The driver remains responsible. The driver must be licensed, alert, sober, and capable of taking over at any time. That is the legal framework, and nobody serious should pretend otherwise. Even the Dutch approval makes that explicit. FSD Supervised is not legally classified as a self-driving car. The driver remains in control and remains responsible.
But this is exactly where far too many people stop thinking. They hear the legal category and assume it tells them everything about the technical capability. It does not.
The fact that a human must supervise the system does not mean the system is doing little. It means the law still requires a qualified human to monitor a system that is already capable of handling a remarkably broad range of driving tasks in the real world. That is a massive difference. Because Tesla FSD Supervised is not some glorified lane centering toy that keeps the car vaguely between white lines on a sunny highway and occasionally changes lanes if the stars align. It is designed to handle route navigation, steering, lane changes and parking under supervision, and in practice its relevance lies in the fact that it operates far beyond the tiny comfort zone in which many Europeans still imagine automated driving to live.
It deals with city traffic. It deals with suburban traffic. It deals with confusing mixed traffic. It deals with pedestrians, buses, cyclists, delivery vans, scooters, awkward junctions, narrow streets, strange markings, and all the little moments where driving stops being a sterile engineering demo and starts becoming real life. That is the part many critics still do not seem to understand.
A lot of them are still mentally stuck in the era of Tesla Autopilot in Europe. And yes, Tesla deserves some blame here. “Autopilot” was never a particularly helpful name. In Europe, for years, that system was basically a somewhat better lane keeping and adaptive cruise package. Useful, yes. Revolutionary, no. It also did not meaningfully evolve into what people now mean when they talk about FSD.
Yet every time the subject comes up, somebody drags out an old Autopilot anecdote, an old YouTube test, or some sensational TV segment that happily mixes up several completely different systems, several generations of software, several legal frameworks and several very different technical ambitions, then presents the whole thing as if it were one single coherent story. It is not. It is intellectual junk food. The difference between old European Autopilot and current FSD Supervised is not cosmetic. It is foundational.
And then we get the next ritual line:
“But Mercedes has Level 3.”
Again, fine. On paper (and is it really more than paper?), in a narrowly defined use case, on certain highways, in near-perfect weather, under strict speed limits, yes, the Mercedes Level 3 approval may look more advanced, because the legal burden shifts further away from the driver in that specific scenario. But the paper category is not the whole story. The interesting question is not just how much liability can be handed over on a mapped highway at reduced speed in perfect conditions. The interesting question is what the system can actually handle across the messy, ugly, chaotic breadth of real traffic.
That is where Tesla becomes far more interesting. A system that can meaningfully navigate ordinary roads, urban situations, mixed traffic, awkward geometry and unpredictable behavior is solving a harder and more relevant problem than a system that looks cleaner in a legal taxonomy while living in a much smaller operational box.
And this leads directly to the next lazy objection:
Vision only. Cameras only. Therefore doomed.
No. That is far too simplistic. Tesla’s vision-only approach is not some cost-cutting gimmick accidentally dressed up as philosophy. It is a deliberate AI thesis. Driving is, at its core, a problem of understanding the world visually in real time. Humans drive primarily with eyes and brains. Tesla’s wager is that a machine can do that job better once it has enough visual coverage, enough data, enough compute, and enough training.
I have already written in detail about that in my earlier piece, A Case for Tesla Vision Only?, so I will not rehash the entire argument from scratch here. But the core idea remains straightforward: teach the system to interpret the world as it actually appears, live, in context, continuously, from a full surround visual perspective.
And no, the comparison with humans is not flattering for humans. We have two eyes. We get tired. We get distracted. We look at climate controls. We poke at touchscreens. We stare at smartphones like idiots. We read messages at traffic lights and pretend that somehow does not count. We miss bicycles. We overlook pedestrians. We daydream. We make assumptions too late. A machine does none of that. Tesla’s system watches continuously with a multitude of cameras. It does not blink. It does not get bored. It does not decide that now would be a fantastic time to fiddle with Spotify. Tesla itself frames this in exactly those terms, arguing that cameras do not blink, feel tired, or get distracted, while its fleet data and over-the-air updates keep improving the system over time. And this matters because many people still imagine driving assistance as if the main challenge were simply “seeing objects.” It is not. The real challenge is interpreting a live scene, understanding relationships, anticipating motion, and reacting appropriately in context.

That is why the “but what about bad weather?” objection is also less clever than people think. Humans do not drive in fog, rain or snow by magically having perfect visibility. We drive by combining incomplete visual input with context, memory, anticipation and caution. We infer where the road goes. We read the behavior of surrounding traffic. We adapt to reduced visibility by understanding the situation. A computer vision system can do the same kind of contextual anticipation, and in some respects potentially better. It can enhance input, sharpen it, brighten it, and process it consistently at a speed and with a level of concentration no human can match. Bad visibility is a challenge for any system, including humans. It is not an automatic proof that a vision-based architecture is fundamentally absurd.
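The claim that a vision system can enhance degraded input is easy to make concrete. Here is a toy sketch in plain Python (my illustration, nothing Tesla-specific): a dim, low-contrast strip of pixel values is linearly rescaled so the full dynamic range becomes usable, the kind of normalization a camera pipeline can apply consistently to every single frame, without fatigue.

```python
def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linearly rescale pixel intensities to span [out_min, out_max].

    A toy stand-in for the preprocessing a camera pipeline can apply
    to dim or hazy frames before interpreting the scene.
    """
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A dim, low-contrast strip of pixels (values bunched between 40 and 80)
foggy = [40, 50, 60, 70, 80]
print(contrast_stretch(foggy))  # rescaled to use the full 0-255 range
```

The point is not the algorithm itself, which is deliberately trivial, but the consistency: the same enhancement runs on every frame, at machine speed, in conditions where a human would simply squint.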
Then there is the favorite engineer-sounding cliché:
More sensors must be better.
Not necessarily. More sensors can also mean more complexity, more cost, and more opportunities for disagreement between sensor modalities. If different systems “see” the world differently, you do not automatically get truth. You can get conflict. You can get ambiguity. You can get a kind of perception split-brain problem that has to be reconciled somehow. And even then, some supposedly superior sensors still cannot handle semantically rich tasks such as reading a sign or recognizing whether a traffic light is red or green without visual input.
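To make the split-brain point concrete, here is a deliberately naive sketch (my illustration, not any real autonomy stack): two modalities report on the same patch of road, and the moment they disagree, the fusion layer must adopt an arbitration policy, because the raw readings alone do not tell you which sensor is right.

```python
def fuse(camera_clear: bool, radar_clear: bool) -> str:
    """Naively fuse two boolean 'path is clear' opinions.

    When the modalities agree, fusion is trivial. When they disagree,
    some policy has to break the tie -- here, the conservative choice
    of trusting whichever sensor reports an obstacle.
    """
    if camera_clear == radar_clear:
        return "clear" if camera_clear else "obstacle"
    # Disagreement: the extra sensor did not add truth, it added a
    # conflict that this policy has to resolve somehow.
    return "obstacle"  # err on the side of caution

print(fuse(True, True))   # both agree: clear
print(fuse(True, False))  # conflict: resolved only by policy, not by data
```

Even this two-line policy has consequences: a radar prone to phantom returns would now trigger phantom braking, which is exactly the kind of modality conflict the objection glosses over.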
So the real question is not whether you can bolt more hardware onto a car. Of course you can. The real question is whether you are building a coherent intelligence system that truly understands the environment and scales elegantly.

That scaling part is where Tesla’s approach becomes strategically brutal. Tesla’s autonomy story is software-defined. The core hardware is already broadly deployed across the fleet. That means capability can improve later by software update or subscription, including on cars that are already on the road. Tesla explicitly ties FSD improvements to over-the-air software updates, which is precisely why its cars can keep gaining functionality after delivery. That is a huge advantage.
With many traditional manufacturers, advanced automated features are tied to optional hardware chosen at the time of purchase. You configure them or you do not. If you did not tick the right box, too bad. If the next software generation arrives later, your older vehicle may simply be left behind. Retrofitting is often impractical, expensive, or impossible.
Tesla’s approach is the exact opposite. Put the necessary foundation into the fleet at scale, then improve the intelligence relentlessly. And that is the point many people still underestimate most: Tesla is not developing this like a classic automotive feature. Tesla is developing it like software. More specifically, like AI software. The cadence matters. The data matters. The feedback loop matters. The rate of improvement matters. Anyone who has watched what AI systems have done over the last two or three years should be very careful before confidently declaring that Tesla’s approach has “hit a wall.” We have seen AI systems go from interesting curiosities to tools that can write, reason, translate, code and synthesize information at a level that would have sounded ridiculous not that long ago. It would be bizarre to assume that machine perception and driving intelligence somehow live outside that broader trend.

That does not mean Tesla is infallible. It does not mean regulation is irrelevant. It does not mean supervision should disappear tomorrow. It does mean that the old dismissal has become lazy. The uncomfortable reality for many traditional car fans is this: Tesla may not yet be legally allowed to call the system autonomous in Europe, but technically it is already doing a very large share of what ordinary people actually mean when they talk about autonomous driving.
And that is precisely why this matters.
Not because the law is finished. Not because the technology is perfect. But because the direction of travel is now impossible to miss unless you actively want to miss it.
The old European picture of Tesla as the company with a badly named highway assistant while the real automotive grown-ups quietly lead the future has aged very badly. And it is aging worse by the month.
The Real Battle
And this points to the larger battle the German car industry still seems determined to misunderstand. Of course, more range is desirable. That is not the issue. The issue is balance. Range has to make sense in relation to weight, efficiency and cost. With current available and proven battery technology, stuffing cars with ever larger battery packs often produces something heavier, pricier and less efficient, not something smarter.
Yes, future battery technologies will improve energy density, reduce weight and space requirements, and likely become cheaper over time. But that is precisely the point: The winning formula today is not maximum battery size at any cost, but the right range at the right price.
And in a world where cars are becoming ever more computerized and software-defined, the old assumption that every vehicle will remain attractive and up to date for more than ten years is becoming harder to defend. These things are increasingly turning into smartphones on wheels. And who exactly still enjoys using a ten-year-old smartphone? That makes affordability more important than ever. The future will not be won by building ever heavier electric cars just to calm an outdated obsession with range, while making them more expensive, more complex and less relevant for the broader market.
Most people do not need oversized electric status symbols built for a tiny minority of long-distance road warriors. They need products that are efficient, affordable, easy to charge and simple to live with. Tesla understood that much earlier. It also understood that the real product is not just the car, but the whole system around it: Software, efficiency, charging, route planning and a user experience that removes friction instead of adding it.

And this is where the wider battle becomes a race against time. A car being “sold out” means very little if production is too slow, too complex or too inefficient to turn demand into real scale, real profit and real market presence. Waiting lists are not industrial strength. Monetising demand at scale is. Tesla understood that too. It is not only able to deliver cars quickly after ordering and book the revenue while demand is still hot. It has also built a product architecture in which the technical foundation for autonomous driving has been shipped broadly across its fleet for years. Since around 2019, millions of vehicles have left the factory already equipped with the necessary hardware, turning autonomy into a software problem rather than a hardware retrofit.
That changes everything. Because it means Tesla is not just selling cars. It is building an installed base that can be upgraded over time. Capabilities can be unlocked later via software updates or subscription models, allowing Tesla to generate revenue long after the initial vehicle sale. In theory, and increasingly in practice, this enables a scenario no other manufacturer can match: Activating new capabilities across millions of existing vehicles almost overnight and monetising them immediately.
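As a sketch of what “activating capabilities across an installed base” means in software terms (a toy model with invented field names, not drawn from Tesla’s actual systems): if every vehicle already ships with a known hardware revision, enabling a feature fleet-wide reduces to flipping an entitlement flag for the eligible subset.

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    vin: str
    hw_version: int                      # hardware revision shipped at the factory
    features: set = field(default_factory=set)

def activate(fleet, feature, min_hw):
    """Enable `feature` on every vehicle whose hardware already supports it."""
    eligible = [v for v in fleet if v.hw_version >= min_hw]
    for v in eligible:
        v.features.add(feature)
    return len(eligible)

fleet = [Vehicle("A1", hw_version=3), Vehicle("B2", hw_version=4),
         Vehicle("C3", hw_version=2)]
count = activate(fleet, "fsd_supervised", min_hw=3)
print(count)  # two vehicles unlocked, no retrofit needed
```

Contrast that with the legacy model, where the same capability depends on an option box ticked years earlier: there, the loop over the fleet simply has nothing to activate.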
At the same time, Tesla can offer vehicles that are competitively priced, technically future-ready, and still highly profitable. No complex option trees, no expensive hardware add-ons, no dependency on what a customer configured years ago. Just a scalable, software-defined platform. That is not just a better EV strategy. It is a fundamentally different industrial model.
And that is why too many legacy manufacturers, and too many of the journalists still cheering them on, increasingly sound like dinosaurs: proudly explaining the past while the asteroid in the sky is apparently somebody else’s problem.
Back to five cents.
//Alex