Exclusive: Meet Aussie Daniel Ho, Tesla’s Full Self Driving guru. He loves it. But Corby’s afraid … very afraid
For a long time, I have tried to deny the possibility that cars will ever become fully autonomous, but two recent experiences have led me to believe, or accept, that the future of driving is not driving at all. Indeed, I think you can expect to see headlines proclaiming that Tesla’s Full Self Driving systems have had their “ChatGPT” moment very soon.
The first experience was last year when I took a ride in a fully autonomous EV taxi on the streets of San Francisco, where I was surrounded by other self-driving taxis run by two separate companies, no less, Waymo and Cruise. At that moment, I had to accept that, within a few years, you will never have to speak to an Uber driver, or smell a taxi driver, again if you don’t want to.
And then, last week, I sat down with senior Tesla executive Daniel Ho, Director of Vehicle Programs and New Product Introduction, who told me in detail just how fast his company’s Autopilot programs are progressing now that AI-powered neural networks have taken over the thinking parts, and that he uses Full Self Driving all the time, because it’s already a better driver than he is.
“The big shift has come from using neural networks, AI that is set up to learn much like a human brain, and we’re also just throwing massive amounts of compute at it, and what I mean by that is just the sheer processing power, the clusters of GPUs, it’s like a massive attack on the problem, using massive installations of computers, and that just allows us to process so much more information, so much more quickly,” explained Daniel, who grew up in Melbourne and worked at Ford before signing up to work closely with Elon Musk more than a decade ago.
“The thing about AI is that it moves so fast, and every now and then you have these moments, what you might refer to as ChatGPT moments, where there’s this inflection point where a technology goes from being ‘okay’ to ‘insanely good’.
“And we are going to get to that moment; it’s hard to say how far away that point is because it can just accelerate so quickly.
“Even just the progress the team has made, if I look at FSD three months ago to where it is today, it’s made such a huge improvement in that time, just incredible.
“I was speaking to one of my guys about how effective this recent work has been, he said it’s not even a case of FSD being faster in terms of iteration speed, he said the best way to describe the difference is that it’s gone from being impossible, to possible.”
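For readers curious what the “learning” Ho describes actually involves, the core idea fits in a few lines. This is a generic, textbook-style sketch of a single artificial neuron nudging its weights to reduce error – not Tesla’s architecture – and the training data and learning rate are invented for the example. The “massive compute” he mentions is essentially this loop scaled up to billions of weights and billions of examples.

```python
# Illustrative only: one artificial neuron learning by gradient descent.
# "Learning" here means repeatedly adjusting the weights to shrink the
# prediction error; big GPU clusters just run this idea at enormous scale.

def train_neuron(samples, epochs=1000, lr=0.1):
    """Fit y = w*x + b to (x, y) pairs by minimising squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y
            # Gradient step: move each weight against the error gradient.
            w -= lr * err * x
            b -= lr * err
    return w, b

# Learn the mapping y = 2x + 1 from a handful of example points.
w, b = train_neuron([(0, 1), (1, 3), (2, 5)])
```

After training, `w` and `b` land very close to 2 and 1: the neuron has “learned” the rule purely from examples, which is the same principle, at toy scale, behind the neural networks Ho credits for FSD’s progress.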
While Ho said he would describe the current versions of FSD being used by the public as sitting somewhere between Level 2 and Level 3 autonomy, a jump to full Level 3 (“hands off but eyes on”) and then Level 4 (“hands off, eyes off, read your emails, the driver is no longer responsible”) is not far away.
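For reference, the levels Ho is talking about come from the industry-standard SAE J3016 scale, which runs from 0 to 5. The sketch below paraphrases those levels in a small lookup table; the descriptions are heavily abbreviated and the helper function is purely illustrative.

```python
# Abbreviated paraphrase of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: the human does everything",
    1: "Driver assistance: steering OR speed support",
    2: "Partial automation: steering AND speed, hands on, eyes on",
    3: "Conditional automation: hands off, eyes on call; system may hand back",
    4: "High automation: hands off, eyes off within a defined operating domain",
    5: "Full automation: no driver needed anywhere",
}

def describe(level: int) -> str:
    """Return a short description of an SAE automation level."""
    return SAE_LEVELS.get(level, "unknown level")
```

On this scale, today’s public FSD sits between 2 and 3, and the leap Ho flags is to 3 and then 4.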
While he’s a driving enthusiast, Daniel says he’s come to accept that he, and everyone else, is safer if he lets his Tesla drive for him.
“As Elon has said, ‘it’s going to save lives if you get it right,’ and it will, because it’ll be better than the average driver, who is terrible, and it’s already, well, yes, it’s better than me,” Ho says, somewhat reluctantly.
“My wife and I were talking about this, the fact that when I’m on FSD, it’s definitely safer than when I’m driving. I would say I’m a fairly confident driver, but I take risks, risks that the FSD will not take.
“It’s driving at the speed limit, or a set speed; I don’t. It’s driving at a safe stopping distance from the car in front; I don’t. Or there will be a small gap in the traffic that I think I can make, but it’s risky; FSD wouldn’t do that.
“So when you sit back and you’re in FSD, I actually feel as though I’m in a safer state than if I was driving.
“This last weekend, I was pretty much running FSD everywhere my wife and I went, you know, picking up groceries, running errands, whatever, I was just using FSD the entire way. And the only time I was intervening, at all, was for potholes.
“So the FSD can’t see those yet, but that’s something that we’ll solve over time, because I mean if we can see it as humans, the neural network will eventually be able to see it.”
Daniel says that he still likes to take over and drive when he’s on a nice bit of road, but the point of FSD is that it can take over the boring parts of driving that you don’t want to do, crawling home in traffic through the Bay Area of San Francisco as he does every day, for example.
“When I leave work, I’m not in the mood for driving because I want to start my decompression, the wind-down phase of my day, so that when I get home I’m a little bit more relaxed. So I’m on FSD from the moment I leave the office, and I’m able then to get back a bit more cognitive capacity,” Daniel explained.
“I still drive the roads that I want to drive, and in the conditions that I want to drive, but when I’m not interested in driving the car will do it for me.”
Daniel did admit that FSD is still not as smooth as he is, and that he’s better at predicting other drivers’ behaviour – we agreed that there’s just a sense you develop of being able to spot a bad, or careless, driver in your vicinity and avoid them. But he says even that is something machines will get better at.
“As a human you feel like sometimes you can see the way a car is behaving, and you know that’s a bad driver, and you know to avoid that driver, but if we can codify that, if we can figure out, what is it about the way that car drove that makes me feel nervous? Is it the following distance, are they too close to the car in front? All that stuff can be modelled and understood.
“So just as we have that intuition, we can model that eventually and autonomous cars will get better at it.”
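Ho’s idea of codifying that intuition can be illustrated with a toy scoring function over measurable signals. Every feature, weight and threshold below is invented for the example – a real system would learn these from fleet data rather than hard-code them.

```python
# Hypothetical sketch: score how "nervous" a nearby car should make you.
# Features, weights and thresholds are all made up for illustration.

def risk_score(time_gap_s: float, lane_wobble_m: float,
               speed_delta_kmh: float) -> float:
    """Higher score = more nervous. All thresholds are hypothetical."""
    score = 0.0
    if time_gap_s < 1.5:                           # tailgating: under ~1.5 s headway
        score += (1.5 - time_gap_s) * 2.0
    score += lane_wobble_m * 1.5                   # drifting within the lane
    score += max(0.0, speed_delta_kmh - 15) * 0.1  # well over traffic speed
    return score

# A tailgating, wobbling, speeding car versus a steady one.
risky = risk_score(time_gap_s=0.8, lane_wobble_m=0.6, speed_delta_kmh=25)
steady = risk_score(time_gap_s=2.5, lane_wobble_m=0.1, speed_delta_kmh=0)
```

The erratic car scores far higher than the steady one, which is the point: once the gut feeling is expressed as measurable signals, a machine can act on it too.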
Daniel says he’s heard the argument that we shouldn’t be using autonomous cars until they are perfect – and Tesla’s Autopilot has been involved in fatal accidents – but he points out that if we held human drivers to the same standard, “no one would be allowed to drive”.
“So what’s the threshold? The machine is not perfect yet, but it’s already so much better than a human,” he insists.
“It’s actually statistically, scientifically rational for the machine to drive. And the more machines there are driving at the same time, the less human error there will be, over all.
“But I like a world where there’s a combination of both. There’s a driver. There are times when I want to drive and there are times when the machine shouldn’t be driving. I don’t want to take driving away from us.”
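Ho’s “statistically rational” argument reduces to simple expected-value arithmetic. The crash rates below are made-up placeholders, not real safety statistics; the point is only that if the machine’s per-kilometre rate is lower than the human’s, expected crashes fall as more driving shifts to machines.

```python
# Back-of-envelope version of the claim, with invented rates.

def expected_crashes(total_km: float, machine_share: float,
                     human_rate: float, machine_rate: float) -> float:
    """machine_share is the fraction of km driven autonomously (0..1)."""
    human_km = total_km * (1 - machine_share)
    machine_km = total_km * machine_share
    return human_km * human_rate + machine_km * machine_rate

# Hypothetical: machine crash rate half the human rate, over a billion km.
baseline = expected_crashes(1e9, 0.0, human_rate=1e-6, machine_rate=5e-7)
half_auto = expected_crashes(1e9, 0.5, human_rate=1e-6, machine_rate=5e-7)
```

Under these assumed numbers, shifting half the kilometres to the machine cuts expected crashes by a quarter – and every further shift cuts them more, which is exactly the logic a safety regulator would be weighing.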
And yet… consider our safety-loving authorities and how tempting it would be to ban humans from driving altogether, if the statistics suggest the world would be safer that way. It’s scary. And it’s really going to happen. Faster than I feared.