I’ve noticed recently that we seem to be lacking some key tools in our discussion about Artificial Intelligence and what the future holds.
Elon Musk, Bill Gates and Stephen Hawking have warned about its potential to kill us all (a small risk of a very big negative outcome), while others, like Mark Zuckerberg, are less worried and more optimistic.
When we try to evaluate these opinions I think all of us are being misled by a simple stumbling block — we’re talking about “Intelligence” as if it’s a single thing that can be replicated artificially, when really it’s multiple different things, many of which we don’t really understand.
This is a quirk of language that trips us up in a few areas: we start using a single word to describe what turns out to be two or three different underlying concepts.
“Sound” is a great example of this fallacy. “If a tree falls in a wood and no-one is around to hear it, does it make a sound?” isn’t really a philosophical problem; it’s just an English language problem.
“Sound” is just one word, but it’s really three concepts. First there’s the generation of sound. A tree falling creates ripples in the air which radiate outwards. Then there’s the reception of sound, where a human ear is physically vibrated by these waves and converts them into electrical signals in nerves. Then there’s the perception of sound, where the brain takes these inputs and creates the “sense” of sound.
When you break it down this way, the initial question about trees in woods makes less sense. Of course the falling tree generates sound waves, but they aren’t received or perceived if there’s no-one there.
Super simple if we have the right words to tackle the question.
How Far Off is General Artificial Intelligence?
If the tree-in-a-wood question made no sense without splitting “sound” into three separate concepts, I think the AI question makes no sense if you don’t split “intelligence” into a dozen.
I don’t really know what all those component parts are, but we can start getting towards them by asking questions that smash some assumptions.
Are the “smartest” people in the world those with the highest IQ? Not really. Lots of the most successful people in history have had decently high IQs, but pumping a brain’s computing power up to a 170 IQ hasn’t historically produced super-humans. So is it right to assume that super-computing AI will necessarily be “super-human”?
How much of our intelligence resides in our brain? In every family there’s usually one person who knows how to fix the router or the printer, one person who does most of the organising, one person who remembers all the family birthdays. A huge amount of what has made human civilisation possible and successful is our ability to store some of our knowledge in the brains of others. If you don’t know something, knowing who to ask can often be just as good (or better, if your network can store more information than you could alone).
I think this is why humans so readily adopted external knowledge storage — from writing to books to computers to Wikipedia — we are evolved for networked intelligence stored in the minds of others. That’s also why a bad break-up or the death of a loved one can feel like losing a part of yourself, because in many ways it is.
Are intelligence and life synonymous? I think the word “life” is also a candidate for being split into two concepts. In some senses, life describes the biological process at the heart of evolution. DNA codes for plants and animals, and if it’s good DNA it makes successful plants and animals which replicate that DNA (by making babies). And so “life” can be viewed as the process by which information codes for biology — plants and animals and eyes and camouflage and brains and even intelligence — which in turn replicate that code. Under this definition I’d agree with Stephen Hawking when he says that computer viruses should be considered as a form of life, because they are strings of information which code for their own replication.
The alternative concept which “life” describes, and which I think most non-biologists would use, is the machine that DNA codes for. The plant or the animal. The living being. DNA also codes for intelligence in many animals — flight intelligence in birds, the intelligence to minutely discern chemicals in the air (smell) in dogs, or the ability to consider the thoughts of other animals in humans. With AI we’re trying to replicate *some* of that intelligence, but also to create different kinds of intelligence, computer intelligence, which will probably be just as different from human intelligence as the latter is from dolphin intelligence.
Is Intelligence even objective? Maybe we find it hard to define because it doesn’t really exist. It could just be a value that we map onto entities by describing them as intelligent. We have decided that a dolphin’s social habits are intelligent, but a plant’s phototropism is not. Or as the tweet below says, it’s a “social value judgement.”
TL;DR
In essence, I’m worried that we’ll hit very big stumbling blocks as we all start to discuss AI more and more, unless we first recognise that when we talk about “Intelligence” broadly, we don’t really know what the heck we’re talking about.
Much of this post was inspired by one of the best articles I’ve read recently, which aims to remind us that “intelligence is both defined and limited by the context in which it expresses itself.”