Measuring intelligence: challenges with the Turing Test

The original Turing test was an elegantly designed measure of intelligence. Alan Turing proposed that we can test whether a machine is intelligent by having a person converse with it. If the person can't tell whether they're talking to a machine or to another human, the machine is deemed intelligent. This simple idea set in motion 70 years of R&D that has led to today's revolutions in AI.

Recently, large language models (word calculators) have exposed shortcomings of this test. Millions of people have been having fluent conversations with ChatGPT, yet the meaning in those conversations is often distorted by what have been dubbed "hallucinations": cases where the AI misinterprets the world, often in subtle ways that ordinary people may miss in conversation. The question on everyone's mind is: how do we address these hallucinations?

The implications are vast. As AI gets deployed into the world without addressing hallucination, we can expect to see:

  • AI giving unhealthy mental health advice, like its recent snafu commending and encouraging people with anorexia for their strict diets.
  • AI confidently suggesting medications for symptoms that could be resolved without drugs, furthering our addiction to pharmaceuticals (20% of Americans take 5+ pills/day).
  • Self-driving cars hallucinating while approaching a school crossing.

In this post, we'll explore the root cause of hallucinations: unembodied representation of the world.

Where do hallucinations come from?

First, we need a common definition of intelligence; you can read my definition here.

If intelligence is viewed as movement, then the testing framework shifts from regurgitating knowledge with words to demonstrating knowing through actions. While our actions have evolved to be abstract (e.g. using words to persuade others to move), these higher-level actions are built on our ability to move around in the world. When abstract actions are not grounded in real-world experience, we are more prone to "hallucinations", and so is AI.

Two frames of movement

What features would AI need in order to talk with us about our experience of moving around in the world? Let's look at this from two extremes: our macro and micro movements. We'll consider both in the context of how we might interact with an everyday object: an orange.

From a macro perspective, we can describe the orange by the way it affects us externally. If the orange is on the floor, it becomes an obstacle we have to walk around. If the orange is on a table, it becomes an item we can pick up. These movement patterns are the easiest to replicate with today's robotics and VR simulations, as sketched below.
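To make the macro frame concrete, here is a minimal sketch (my own illustration in Python; the function name and reach thresholds are hypothetical, not from any real simulator) of how a simulation might classify the orange by the movement it invites rather than by its geometry:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    height_m: float  # height of the object's resting surface above the floor

def afforded_movement(obj: SceneObject, reach_min_m: float = 0.6,
                      reach_max_m: float = 1.8) -> str:
    """Classify the object by the movement it invites, not by its shape."""
    if obj.height_m < reach_min_m:
        return "obstacle: step around it"
    if obj.height_m <= reach_max_m:
        return "graspable: pick it up"
    return "out of reach: ignore it or find a tool"

print(afforded_movement(SceneObject("orange on floor", 0.0)))   # obstacle
print(afforded_movement(SceneObject("orange on table", 0.75)))  # graspable
```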

[Embedded TikTok video: @tamulur, "Big robots walking" (Unity animation)]

The situation gets significantly more complicated when we consider our micro movements. When we grasp an orange, the technology of our body initiates a cascade of movements involving roughly 37 trillion cells, each moving independently but in relation to the others. These subtle movements are imperceptible to our conscious minds (though accessible to our subconscious minds through practices like meditation). Compare this with a robotic arm. Even though the underlying substance of the robotic arm involves billions of particles, there are only 6 joints that move independently in a standard mechanical arm.

Similarly, when we bite into an orange, even more involved movement patterns unfold within our body, each governed by the highly articulated actuators of our tiny cells. There is no comparable movement pattern in today's robotic or digital technologies.

Our micro movements are the hidden source of our intellect. They imbue our macro movements with intelligence. For a machine to have human-level intelligence, its artificial technology would need to match our body's intelligence. In computational terms, it may be helpful to consider our intelligence as not just the synaptic firing of neurons in our brain, but the degrees of movement of each independently movable part of our body.

Our micro movements are the hidden source of our intellect. They imbue our macro movements with intelligence.
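To put rough numbers on that framing, here is a back-of-envelope comparison (my own figures: the ~37 trillion cell count and the ~244 skeletal degrees of freedom are commonly cited estimates from biology and biomechanics, not the author's data):

```python
# Independently controllable degrees of freedom (DOF), order-of-magnitude view.
robot_arm_dof = 6                  # standard 6-axis industrial arm
human_skeleton_dof = 244           # a commonly cited estimate of joint DOF
human_cells = 37_000_000_000_000   # ~37 trillion cells, each in motion

print(f"Robot arm DOF:       {robot_arm_dof:>20,}")
print(f"Skeletal joint DOF:  {human_skeleton_dof:>20,}")
print(f"Cells in a body:     {human_cells:>20,}")
print(f"Cells per arm joint: {human_cells // robot_arm_dof:>20,}")
```

However you count, the gap is not a factor of ten; it is many orders of magnitude.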

Properties that account for our intelligence

What would AI need in order to emulate our intelligence? Feeding large language models (LLMs) more words would not be sufficient. For an AI to understand what words mean to a human being, it would need to experience the world in a similar way. There are two ways we might approach this problem: a VR simulation of movement, or a physical machine that moves in the world. Let's explore the properties that either would need in order to account for our intelligence.

Embodiment

Unlike LLMs, which have no embodied representation, an AI exhibiting human-level intelligence would need to experience the world through a body. Further, the body would need to be shaped like a human (as opposed to a self-driving car or a robotic arm). To us, the world is defined in relation to the shape of our body. Our analytical selves may like to think that we interpret everyday objects as geometric shapes, but the shape of an orange is determined by the way it fits into our hands more than by a conceptualized geometric sphere.

To humans, the world is defined in relation to the shape of our body.
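As a toy illustration of a body-relative representation (entirely my own sketch; the hand-aperture measure and the numbers are assumptions), the orange's "shape" could be encoded as whether it fits a one-handed grip rather than as a sphere:

```python
ORANGE_DIAMETER_CM = 7.5   # a typical orange
HAND_APERTURE_CM = 9.0     # rough adult one-hand grip span

def fits_the_hand(object_diameter_cm: float,
                  aperture_cm: float = HAND_APERTURE_CM) -> bool:
    """Body-relative test: can one hand close around the object?"""
    return object_diameter_cm <= aperture_cm

print(fits_the_hand(ORANGE_DIAMETER_CM))  # True: a one-hand grasp
print(fits_the_hand(25.0))                # False: a watermelon needs two hands
```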

Degrees of movement

From an anatomical perspective, there are countless movement patterns occurring within our bodies; so many that there has yet to be a holistic account of them. From the way our muscles actuate, and the experience of our blood circulating, to the way our neurons fire, most if not all of these movements would need to be simulated or physically manifested.

Physics/gravity

Our model of the world is premised on a theory of gravity that remains incomplete: we have yet to reconcile general relativity with quantum mechanics. We know gravity exists because we experience it through our bodies, but our physics simulators don't have a full account of gravity, and as such an AI that learns about the world through our incomplete description of it would be prone to hallucinations in ways we cannot anticipate. We are just beginning to explore the role gravity and quantum physics play in the way our bodies move and the way our synapses fire.
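For a sense of what simulators actually do, here is a sketch of the standard engineering approximation (assuming constant g = 9.81 m/s² and explicit Euler integration), which is a useful shortcut rather than a full theory of gravity:

```python
g = 9.81   # m/s^2, treated as constant near Earth's surface
dt = 0.01  # simulation timestep in seconds

def drop(height_m: float) -> float:
    """Simulate a falling orange with explicit Euler steps; return fall time."""
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:
        v += g * dt   # the constant-g assumption baked into the simulator
        y -= v * dt
        t += dt
    return t

print(f"Orange dropped from 1 m lands after ~{drop(1.0):.2f} s")
```

An AI trained inside such a simulator inherits every simplification in those few lines.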

Brain

Our brain is a necessary (but not sufficient) organ of our intelligence. This is the current realm of AI development. Much more remains to be explored, but there is a vibrant research effort underway modeling the intricate map of relationships among our neurons.

Durability

Human beings degrade, and as Martin Heidegger argued, we come to understand the world through the temporality of our existence. In technical speak, our objective function is defined by our lifespan. People go to great lengths to stave off death, but in the end it consumes us all; if it didn't, we wouldn't recognize what it means to be a human being.
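One way to formalize "our objective function is defined by our lifespan" (my own sketch; the reward values are arbitrary) is to contrast a hard finite horizon with the open-ended discounted return common in reinforcement learning:

```python
def finite_horizon_return(rewards, lifespan: int) -> float:
    """Sum rewards only up to a fixed lifespan T; mortality ends the sum."""
    return sum(rewards[:lifespan])

def discounted_return(rewards, gamma: float = 0.99) -> float:
    """Standard discounted return: decays forever but never truly ends."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

rewards = [1.0] * 200
print(finite_horizon_return(rewards, lifespan=80))  # 80.0: the sum stops
print(round(discounted_return(rewards), 1))         # ~86.6: no hard cutoff
```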

An orange is more than a fruit: it is a consumable food product that, over our evolution, would once have been the difference between life and death. Nowadays, an orange is a pleasant treat, but our subconscious minds still take stock that the orange is there, and that if the situation became dire we might have to fight off others to obtain it.

AI in our lives: Two pathways

AI technologies like LLMs will continue to impress us with their powerful ability to calculate words, but until AI addresses the criteria above, we can expect to see hallucinations arising from the gap between how AI represents the world and how we experience it. Until then, there are two pathways we can consider for incorporating AI into our lives.

In one path, unembodied AI is not deemed "intelligent", and products that employ AI prompt humans with options whenever a decision requiring intelligent human movement is in question. AI would be allowed to make decisions whenever the task is unembodied, like summarizing text, or a self-driving car driving alongside another self-driving car; but for cases that involve human-level judgement, like driving among human drivers, AI would not be allowed to operate on its own. A sketch of this gating follows.
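A minimal sketch of that gating rule (the task names and routing function are hypothetical, not a real product's API) might look like:

```python
# Tasks judged to require embodied, human-level judgement.
EMBODIED_TASKS = {"drive among human drivers", "give mental health advice"}

def route(task: str) -> str:
    """Decide who is allowed to act on a given task."""
    if task in EMBODIED_TASKS:
        return "prompt a human with options"
    return "AI may act autonomously"

print(route("summarize text"))             # AI may act autonomously
print(route("drive among human drivers"))  # prompt a human with options
```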

There is another way for AI and human beings to talk intelligently: humans become more robotic in their behavior. This is already happening with self-driving technologies. The biggest cause of accidents between human drivers and self-driving cars is the rear-end collision, often because the AI hallucinates something on the road that doesn't exist and slams on the brakes when no human driver would expect it. If self-driving cars continue to be deployed without dedicated lanes, human drivers will need to get more comfortable being ready to slam on the brakes as well.

Similarly, businesses can choose to use AI to enforce ever more rules on the workforce, or they can use AI to empower people to be more expressive. For example, AI can make it easier for companies to monitor call center agents to ensure they read scripts exactly as the company wants (like a robot), or it can surface knowledge and suggest cues that allow agents to make more informed judgements about how to empathize with the customer's pain.

Regardless of which path we choose, we can be sure that advancements in AI will change the way we move. In order to ensure it’s for the betterment of human beings, we need to appreciate the technology of our body, not just our brain. Follow this blog to hear more about the movement patterns that give rise to intelligence.

Follow

Connect with me for a regular roundup of ideas and thought experiments.

mail@matthewkael.com