Measuring intelligence: challenges with the Turing Test
The original Turing test was an elegantly designed measure of intelligence. Alan Turing proposed that we can test whether an AI is intelligent simply by talking to it: if we can't tell during the conversation whether we're speaking with an AI or a person, the AI is deemed intelligent. This simple idea set in motion 70 years of R&D that has led to today's revolutions in AI.
Recently, large language models (word calculators) have exposed shortcomings of this test. Millions of people have been having fluent conversations with ChatGPT, yet the meaning in those conversations is often distorted by what have been dubbed “hallucinations”: cases where the AI interprets the world incorrectly, often in subtle ways that ordinary people may miss in conversation. The question on everyone’s mind is how to address these hallucinations.
The implications are vast. As AI gets deployed into the world without addressing hallucination, we can expect to see:
- AI giving harmful mental health advice, like its recent snafu praising and encouraging people with anorexia for their strict diets.
- AI confidently suggesting medications for symptoms that could be resolved without drugs, furthering our addiction to pharmaceuticals (20% of Americans take 5+ pills/day).
- Self-driving cars hallucinating as they approach a school crossing.
In this post, we’ll explore the root cause of hallucinations: unembodied worldly representation.
Where do hallucinations come from?
First, we need a common definition of intelligence; you can read my definition here.
If intelligence is viewed as movement, then the testing framework shifts from regurgitating knowledge with words to demonstrating knowing with actions. While our actions have evolved to be abstract (e.g. using words to persuade others to move), these higher-level actions are grounded in our ability to move around in the world. When abstract actions are not grounded in real-world experience, we are more prone to “hallucinations”, and so is AI.
Two frames of movement
What features would AI need in order to talk with us about our experience of moving around in the world? Let’s look at this from two extremes: our macro and micro movements. We’ll consider both in the context of how we might interact with an everyday object: an orange.
From a macro perspective, we can describe the orange by the way it affects us externally. If the orange is on the floor, it becomes an obstacle we have to walk around. If the orange is on a table, it becomes an item we can pick up. These movement patterns are the easiest to replicate with today’s robotics and VR simulations.
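To make the macro frame concrete, here is a minimal sketch in Python (the class, function, and location-based rule are hypothetical simplifications, not how any particular robotics or VR engine works) of how a simulation might encode these affordances: the same orange offers different movement options depending on where it sits relative to us.

```python
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    location: str  # toy simplification: "floor" or "table"

def macro_affordances(obj: WorldObject) -> list[str]:
    """Return the macro-level movement options an object offers an agent.

    Real robotics and VR systems derive affordances from geometry, physics,
    and the agent's body; here a location label stands in for all of that.
    """
    if obj.location == "floor":
        return ["walk_around"]   # the orange is an obstacle
    if obj.location == "table":
        return ["pick_up"]       # the orange is something we can grasp
    return []

orange = WorldObject(name="orange", location="table")
print(macro_affordances(orange))  # ['pick_up']
```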
[Embedded video: @tamulur, “Big robots walking” — a Unity animation demo]
The situation gets significantly more complicated when we consider our micro movements. When we grasp an orange, the technology within our body initiates a cascade of movements involving 100 trillion cells, each moving independently but in relation to the others. These subtle movements are imperceptible to our conscious minds (though accessible to our subconscious minds through practices like meditation). Compare this with a robotic arm: even though its underlying substance involves billions of particles, a standard mechanical arm has only 6 joints that move independently.
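As a rough back-of-the-envelope comparison (using only the figures cited above as illustrative assumptions), the gap in independently moving parts is easy to state in code:

```python
# Illustrative comparison of independently moving parts, using the
# figures from the text above (assumptions, not measurements).
robot_arm_joints = 6                     # standard 6-joint mechanical arm
human_body_cells = 100_000_000_000_000   # ~100 trillion cells

print(f"Independently moving parts, arm:  {robot_arm_joints}")
print(f"Independently moving parts, body: {human_body_cells:.0e}")
print(f"Ratio: {human_body_cells / robot_arm_joints:.1e}")  # ~1.7e13
```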
Similarly, when we bite into an orange, even more intricate movement patterns unfold within our body, each governed by the highly articulate actuators of our tiny cells. There is no comparable movement pattern in today’s robotic or digital technologies.
Our micro movements are the hidden source of our intellect. They imbue our macro movements with intelligence. For a machine to have human-level intelligence, its artificial technology would need to match our body’s intelligence. In computational terms, it may be helpful to consider our intelligence as not just the synaptic firing of neurons in our brain, but the degrees of movement of each independently moveable part of our body.
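One way to read that computational framing is as a question of how much state we track. The sketch below (a purely hypothetical structure, with made-up sizes) models an agent’s state as both neural activity and the position of every independently moveable part, so the body’s degrees of movement count toward its intelligence alongside its “neurons”.

```python
from dataclasses import dataclass, field

@dataclass
class EmbodiedState:
    """Toy state for the framing above: intelligence as neural firing
    plus the configuration of every independently moveable body part.
    All sizes here are illustrative assumptions."""
    neural_activations: list[float] = field(default_factory=list)
    body_part_positions: list[float] = field(default_factory=list)

    def degrees_of_movement(self) -> int:
        return len(self.body_part_positions)

# A standard 6-joint robotic arm tracks only six body dimensions,
# however large its control network is.
arm = EmbodiedState(neural_activations=[0.0] * 10_000,
                    body_part_positions=[0.0] * 6)
print(arm.degrees_of_movement())  # 6
```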