What separates humans from AI? It’s doubt
Computers can drive our cars and beat us at chess, but what they lack (for now) is our ability to know when we don’t know
Back in April, the FT ran an intriguing piece by Stephen M. Fleming (Wellcome/Royal Society Sir Henry Dale Fellow at University College London) examining metacognition in AI. The piece presented Fleming’s concerns about the most important way in which even the most sophisticated AI machines differ from people: they cannot doubt, and so cannot question their correctness or, by implication, their decisions. As he notes:
AI researchers have known for some time that machine-learning technology tends to be overconfident. For instance, imagine I ask an artificial neural network — a piece of computer software inspired by how the brain works, which can learn to perform new tasks — to classify a picture of a dolphin, even though all it has seen are cats and dogs. Unsurprisingly, having never been trained on dolphins, the network cannot issue the answer “dolphin”. But instead of throwing up its hands and admitting defeat, it often gives wrong answers with high confidence. In fact, as a 2019 paper from Matthias Hein’s group at the University of Tübingen showed, as the test images become more and more different from the training data, the AI’s confidence goes up, not down — exactly the opposite of what it should do.
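The mechanics behind the overconfidence Fleming describes can be seen in the softmax function most classifiers use as their final layer. A minimal sketch (the toy logit values are illustrative assumptions, not from any real model):

```python
import math

def softmax(logits):
    # Softmax converts raw scores into probabilities that sum to 1,
    # so the network must assign 100% of its confidence to the classes
    # it knows -- there is no built-in "none of the above" option.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a cats-vs-dogs classifier shown a dolphin.
# The raw scores are meaningless for this input, but one is still
# larger than the other, so softmax reports high confidence anyway.
cat_prob, dog_prob = softmax([4.1, 1.3])
print(f"cat: {cat_prob:.2f}, dog: {dog_prob:.2f}")  # cat: 0.94, dog: 0.06
```

Because the probabilities must sum to one, the network never registers “I don’t know”; it simply reallocates its certainty among the only answers it has.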
This problem matters because as AI machines become more common, familiarity carries a significant downside: we tend to think of them as “optimized” and somehow more intelligent than humans. Fleming notes that this is a severe error, and that it is critical to consider the nature of AI systems carefully before we casually allow them into every aspect of our lives:
The history of automation suggests that once machines become part of the fabric of our daily lives, humans tend to become complacent about the risks involved. As the philosopher Daniel Dennett points out, “The real danger . . . is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.”
That phrase “far beyond their competence” might seem strange to apply to a machine, but many serious AI thinkers agree with Dennett, who believes that “AI systems that deliberately conceal their shortcuts and gaps of incompetence should be deemed fraudulent, and their creators should go to jail for committing the crime of creating or using an artificial intelligence that impersonates a human being.” In his view, AI systems should be forced to disclose all of their risks and shortcomings, much as commercials for medicines do today.
Other researchers agree:
As automated systems become more complex, their propensity to fail in unexpected ways increases. As humans, we often notice their failures with the same ease that we recognize our own plans going awry. Yet the systems themselves are frequently oblivious that the function they are designed to perform is no longer being performed. This is because humans have explicit expectations — about both the system’s behavior and our own behaviors — that allow us to notice an unexpected event.
Fleming, for his part, thinks that we should encourage AI to question itself and to admit when there is a known unknown:
Let’s imagine what a future might look like in which we are surrounded by metacognitive machines. Self-driving cars could be engineered (both inside and out) to glow gently in different colours, depending on how confident they were that they knew what to do next — perhaps a blue glow for when they were confident and a yellow glow for when they were uncertain. These colours could signal to their human occupants to take over control when needed and would increase trust that our cars know what they are doing at all other times.
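Fleming’s thought experiment reduces to a simple mapping from a model’s self-reported confidence to a human-readable signal. A minimal sketch (the function name and the 0.8 threshold are hypothetical choices, not from the article):

```python
def glow_colour(confidence, threshold=0.8):
    """Map a model's confidence score (0.0-1.0) to Fleming's imagined
    signal: blue when the system is confident, yellow when it is not.
    The 0.8 threshold is an illustrative assumption."""
    return "blue" if confidence >= threshold else "yellow"

print(glow_colour(0.95))  # blue  -- car believes it knows what to do
print(glow_colour(0.55))  # yellow -- human occupant should take over
```

Of course, the hard part is not the mapping but producing a confidence score that is actually calibrated; as the Tübingen result above shows, today’s networks often report exactly the wrong number.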
As Dennett argues in his book From Bacteria to Bach and Back: The Evolution of Minds, we also need to prohibit AI providers from hiding their creations from us. As he puts it: “when you are interacting with a computer, you should know you are interacting with a computer.” In his opinion, any system that hides its identity as artificial, something so many AI-based systems try to do today, “should be deemed fraudulent, and their creators should go to jail for committing the crime of creating or using an artificial intelligence that impersonates a human being.”
The idea of “introspective robots” may seem far-fetched, but then so did self-driving cars not too long ago. Fleming, Dennett, and their peers are correct to warn against ascribing to AI a level of competence it has not earned. Furthermore, until these systems fully deserve the trust of humans, there is a strong argument for the AI self-revelation that Dennett proposes. All in all, there is no better time than now to discuss not just giving AI systems the capacity for self-doubt and self-reflection but requiring them to share their doubts and true identity with us.