A computer can recognize a face, but it can't find it beautiful. A computer has memory, but it can't have memories. It can produce images, but it doesn't have any imagination. A computer can learn from its mistakes, but it cannot regret them.
It can compare ideas, but it cannot have an idea. What we call "intelligence" is not a single ability but a set of skills, innate or acquired, that requires us both to know and to ignore, to feel and to stand back, to ask questions and to answer them. These skills are inseparable from surprise, sensation, intuition, laughter. The very essence of intelligence is that it is human and that it cannot be recreated by something artificial. If it became artificial, it would mean that we had given up on using our own.
But the topic is raised again and again in the media. As soon as a computer defeats a human being at one game or another, the myth that artificial intelligence will become part of our lives is resurrected. Computers can free us from many tedious chores, but that doesn't mean they'll make us free.
They can help us foresee but not want. They can help us find information, but they won't tell us what to look for. They can analyse the way things are headed, but they can't understand what that means.
Computers manipulate symbols without grasping their meaning, which is why they are unable to process the semantics of any language, Chinese included, no matter what Google Translate achieves. For the sceptics, this proves that there is absolutely nothing to discuss, let alone worry about. There is no genuine AI, so a fortiori there are no problems caused by it. Relax and enjoy all these wonderful electric gadgets. This might not be accidental.
When there is big money involved, people easily get confused. The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell the difference between the two from their answers, then the robot passes the test.
It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s. Let me offer a bet. I hate aubergine (eggplant), but I shall eat a plate of it if a software program passes the Turing test and wins the Loebner Prize gold medal before 16 July 2018. It is a safe bet.
Both Singularitarians and AItheists are mistaken. As Turing himself remarked in 1950, the question of whether machines can think is "too meaningless to deserve discussion". Ironically, or perhaps presciently, that question is engraved on the Loebner Prize medal. This holds true, no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason. True AI is not logically impossible, but it is utterly implausible.
We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do.
They are not. There are plenty of well-known results that indicate the limits of computation: so-called undecidable problems, for which it can be proved that no algorithm can be constructed that always leads to a correct yes-or-no answer. We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which shows that proof systems in logic on the one hand and models of computation on the other are structurally the same kind of object, and so any logical limit applies to computers as well.
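The undecidability mentioned here can be made concrete. What follows is a minimal sketch of Turing's halting-problem argument in Python; the names `halts` and `paradox` are illustrative, and `halts` is deliberately a stub, since the whole point is that no correct implementation can exist:

```python
def halts(program, data):
    """Hypothetical perfect halting oracle.

    The diagonal argument below shows that no total, correct
    version of this function can exist, so this stub only raises.
    """
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts:
    # loop forever exactly when `program` is said to halt on itself.
    if halts(program, program):
        while True:
            pass
    return "halted"

# If halts(paradox, paradox) returned True, paradox would loop forever;
# if it returned False, paradox would halt. Either answer is wrong,
# so no correct `halts` can ever be written.
```

Feeding `paradox` to itself forces the contradiction: whatever the oracle answers, it is refuted by the very program built on top of it.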
Plenty of machines can do amazing things, including playing checkers, chess and Go, and the quiz show Jeopardy!, better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
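To make the abstract model concrete, here is a minimal Turing-machine simulator in Python. The bit-flipping machine and its rule format are a toy example of my own, not anything from the essay, but the ingredients (tape, head, states, transition rules) are exactly those of Turing's model:

```python
def run(rules, tape, state="start", head=0, max_steps=1000):
    """Simulate a one-tape Turing machine until it halts."""
    tape = dict(enumerate(tape))          # sparse tape; blank cell = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Transition rules: (state, read symbol) -> (write, move, next state).
# This machine flips every bit it scans, then halts at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "1011"))   # -> 0100
```

Every program running on a laptop, a phone or a quantum computer is, in computability terms, an elaboration of this same scheme.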
Quantum computers are constrained by the same limits: the limits of what can be computed, the so-called computable functions. No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies, thanks also to the enormous amount of available data and some very sophisticated programming, are increasingly able to deal with more tasks better than we do, including predicting our behaviours.
So we are not the only agents able to perform tasks successfully. This is what I have defined as the Fourth Revolution in our self-understanding. We are not at the centre of the Universe (Copernicus), of the biological kingdom (Charles Darwin), or of rationality (Sigmund Freud).
And after Turing, we are no longer at the centre of the infosphere, the world of information processing and smart agency, either. We share the infosphere with digital technologies: ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us re-evaluate human exceptionality and our special role in the Universe, which remains unique.
We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank. What is the difference between us and these machines? The same as between you and the dishwasher when washing the dishes. And the consequence? That any apocalyptic vision of AI can be disregarded. The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, one where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence.
Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge. Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations. It is like a two-knife system that can sharpen itself.
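The self-sharpening loop can be pictured with a deliberately simple numeric analogy (my example, not the essay's): an iteration in which each output is fed straight back in as the next input, improving the estimate every round, as in Newton's method for square roots.

```python
def newton_sqrt(x, guess=1.0, steps=20):
    # Each pass consumes the previous output as its input,
    # sharpening the estimate: a toy "output as input" loop.
    for _ in range(steps):
        guess = (guess + x / guess) / 2.0
    return guess

print(round(newton_sqrt(2.0), 6))   # -> 1.414214
```

No understanding is involved anywhere in the loop; the improvement comes purely from repetition and feedback, which is the essay's point about smart technologies in general.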
We are and shall remain, for any foreseeable future, the problem, not our technology. So we should concentrate on the real challenges. By way of conclusion, let me list five of them, all equally important. We should make AI environment-friendly.
We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.
We should make AI human-friendly. We will need the processing power and increasingly intelligent insights generated by machines to take on our most pressing global challenges, from tackling climate change to curing cancer, and to seek answers to the deepest questions about ourselves and our place in the wider universe. Attempts by the medieval alchemist Roger Bacon notwithstanding, engineers have so far failed to emulate the human brain in machine form.
It is quite possible that they will never succeed in that ambition. But that failure is irrelevant. If we wield these technologies wisely and responsibly, they can help us build a better future for all humanity. The views expressed are those of the author(s) and are not necessarily those of Scientific American. Lord John Browne, trained as a professional engineer, was group chief executive of BP from 1995 to 2007, where he built a reputation as a visionary leader, transforming BP into one of the world's most successful companies.
He is now the executive chairman of L1 Energy.