I was struck by a sentence in a book on Artificial Intelligence, How AI Thinks, by tech entrepreneur Nigel Toon. He wrote that Mercedes had launched a Level 3 autonomous car in 2023, although in fact it was limited to operating in Nevada. Level 3 is essentially a car that can largely drive itself but still requires the ‘operator’, or rather the driver, to keep paying attention at all times in case the autonomous control malfunctions.
Toon did not mention that Level 3 is actually deeply problematic. Humans are good at controlling and running things, but they soon get bored when they are merely supposed to be supervising something. Their minds wander, they get distracted or even, as has happened in tests, fall asleep. Level 3 has consequently been dropped by some manufacturers and developers. Toon passed over all of this and instead casually wrote: ‘It won’t be long, though, before we have fully autonomous “Level 5” cars which can operate without any assistance.’
This infuriated me, so much so that I wanted to bin the book, which I would have done had it not been a present from my daughter. Level 5 is the holy grail of the driverless-car proponents. It means a car that could drive anywhere, in all weathers and on every type of road, with no intervention whatsoever from its passengers or a remote controller. It has been talked about for 20 years, ever since the driverless-car tech race was triggered by a competition launched by the US Department of Defense, the DARPA Grand Challenge of 2004. We are nowhere near achieving it and, as I have consistently maintained, there are insuperable difficulties regarding safety, security and consumer choice. (I won’t rehearse these again here.)
Toon’s argument is, essentially, that AI will solve all the remaining problems involved in making cars fully autonomous. The autonomous-car enthusiasts, led by Google subsidiary Waymo, argue that we are nearly there. Look at the robotaxis in San Francisco, Phoenix and elsewhere, including China, they say. But there is a long road from cars that can travel in well-mapped urban areas with unknown levels of remote supervision – the tech companies do not disclose precisely how ‘autonomous’ these cars really are – to full Level 5 ubiquity.
Toon implies that AI will provide the key upgrade that will allow full autonomy. I have recently started using AI to help in my research for my new book on high speed rail, as well as for my various articles, and have discovered just how error-strewn it is. Even in my limited use, I have come across numerous mistakes. I once asked a question about the desirability of an immediate election and was told that the latest possible date for a UK general election would be January 2025. The only trouble was that it told me this in July 2025, and it did not seem to know that an election had been held in July 2024. On another occasion it gave me correct information about an inventor of a new type of train but got his birth and death dates wrong by 30 years. There have been other errors in the AI answers I have been given, although of course there is much useful information as well.
My guru on these matters is Professor John Naughton, who writes a weekly column in the Observer casting a sceptical but well-informed eye over tech issues. In a recent column, he looked at the launch of the latest version of ChatGPT, billed as the next step towards ‘artificial general intelligence’, and found it wanting and still full of errors. He quoted business-school professor Ethan Mollick, an experienced user of these tools, as saying: ‘GPT-5 does stuff, often extraordinary stuff, sometimes weird stuff, sometimes very AI stuff, on its own.’
Naughton argues that the very term AI is a misnomer: these are merely large language models, which rely on being stuffed with vast quantities of data from which they derive their answers. As he puts it, ‘It’s never going to morph into Einstein.’ As the error in my question about elections demonstrates, these large language models have no real knowledge of the world around them; they are simply programs whose answers are, at times, garbled responses to the questions they have been asked.
To argue, therefore, as Toon does, that AI will ultimately lead to the nirvana of Level 5 driverless cars is tendentious in the extreme. Just as there are certain basic problems with Level 5 (my favourite is what happens when two driverless cars meet on a one-lane country road, but there are several others), there are fundamental issues with AI which appear impossible to resolve, since it will always rely on data that is itself not necessarily reliable. Garbage in, garbage out.
