“I propose to consider the question, ‘Can machines think?’”
Alan Turing (1950)
There are two opposing and equally far-fetched approaches to Artificial Intelligence. The first holds that AI can achieve human-like intelligence, solving any number of intractable problems which haunt society today. The second is a kind of paranoia, ranging from fears of mass unemployment to something approaching a Terminator movie. Both approaches hugely overestimate the capacity of AI and underestimate the great obstacles which still stand between machines and general intelligence.
I am a huge fan of sci-fi. So, for me, Arthur C. Clarke’s seminal “2001: A Space Odyssey” is as important as Homer’s original Odyssey. The book (and film) was ground-breaking in its narrative of intelligence, tracing the evolution of both human and machine minds. In the story, a computer system managing a spaceship becomes both self-aware and paranoid, with catastrophic results. To us, looking back, it seems way ahead of its time; the book was written in 1968, when normal working lives were scarcely affected by the nascent AI research industry. Yet Arthur C. Clarke had plenty of material to work with. 1968 was towards the end of what is now called the Golden Age of AI. Yes, that is right, the Golden Age of AI is long gone. The ramifications of those incredibly productive years are clearly evident in today’s digital era. But, productive as it was, AI research was consistently thrown back by the complexity barrier: many of the successes of AI in controlled and limited situations simply do not scale up to real-world problems. As Elon Musk recently discovered to his cost, “humans are underrated”. Do not doubt that work in the Information Age is changing radically. Some jobs will disappear. Future of Work (#fow) researchers point out that there will also be plenty of new opportunities for those with the most human of skills: soft skills, creativity, innovation and leadership. There is no doubt that AI is allowing us to do things unimaginable previously, but what can’t it do?
Perception

One of the five components of general intelligence, perception is arguably the biggest challenge for AI researchers today. It does not just relate to our five senses (machines may have more) but to how these are integrated and interpreted. This takes an astonishing amount of processing power, but it is the integration of sensory data, and relating it to the external environment, which causes the biggest headaches. Clearly, there has been progress in this field (e.g. driverless cars), but it has not been easy, and it is a good example of the complexity barrier. For example, an automated robot which retrieves parcels in an Amazon warehouse actually moves the whole shelf unit containing the target parcel; a human then selects the parcel from the unit. The robot moves, with the assistance of sensors, within a grid which keeps the problem simple. You cannot ask the robot to deviate from its core task. It is not worth the expense of equipping the robot with the perceptive capability to select a single parcel from the shelf, because the coordination required is very challenging. Yet a human can do this easily. What does this mean for the future of work? Robots cannot easily move or manipulate objects outside narrowly defined and carefully constructed criteria. Notwithstanding the progress being made by driverless cars, it does not look like this barrier will be overcome very soon. Humans will still be needed wherever heavy reliance on the senses is required.
Finding solutions to complex problems

From any given position on a chess board, there are usually about 35 possible moves. But ten moves on from that position there are roughly 3 million billion board positions. When searching for solutions in a complex situation, the possibilities become practically endless, even from relatively simple starting conditions. This is the basis of complexity and chaos theories: even when we confine a system to simple rules, the system creates unpredictable states very quickly. Although computers can check the validity of a given solution much more quickly than a human, and they have mastered chess, some problems are simply too complex to search exhaustively. These so-called NP-complete problems turn out to be exactly the sorts of problems we would really like AI to solve. In other words, human creativity is still needed to solve many problems, even if AI can check our answers much more efficiently.
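The arithmetic above, and the gap between checking an answer and finding one, can be sketched in a few lines of Python. Subset sum stands in here as a classic NP-complete example; the numbers are purely illustrative:

```python
import itertools

# Branching explosion on a chess board: with roughly 35 legal moves
# per position, looking 10 moves ahead multiplies the possibilities.
positions = 35 ** 10
print(f"{positions:.2e}")  # about 2.76e15 -- roughly 3 million billion

# Subset sum: given some numbers, is there a subset adding up to a target?
def verify(numbers, subset, target):
    """Checking a proposed answer takes only linear time."""
    return set(subset) <= set(numbers) and sum(subset) == target

def search(numbers, target):
    """Finding an answer by brute force means trying every subset --
    exponentially many as the list grows."""
    for r in range(len(numbers) + 1):
        for combo in itertools.combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = search(nums, 9)
print(answer, verify(nums, answer, 9))
```

The asymmetry is the point: `verify` scans the list once, while `search` faces 2^n candidate subsets, just as a chess engine faces 35^n continuations.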
Correlation vs. cause

AI relies more on correlation than on understanding causal effects in a data set. This has led to some hilarious findings when AI is used to analyse big data, some of which have been paraded in the press as clear examples of AI gone wrong. New techniques, such as Bayesian data analysis, are being developed which may help. Nevertheless, AI is yet to beat humans at understanding causal effects when analysing data. Because AI is still a long way from possessing the sort of general intelligence humans can deploy, it is an equal distance from the faculties of reasoning which can tell cause from correlation. Consequently, data scientists are ever more in demand for their ability to interpret the outputs of sophisticated statistical programmes and AI. Once again, human creativity and reasoning remain indispensable.
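A tiny illustration, with entirely made-up numbers, of why correlation alone misleads: a hidden common cause can make two variables track each other closely even though neither causes the other.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Temperature is the hidden common cause driving both series.
temperature = [15, 18, 22, 25, 28, 31]
ice_cream = [2 * t + 1 for t in temperature]   # sales rise with heat
drownings = [t // 3 for t in temperature]      # so do swimming accidents

print(pearson(ice_cream, drownings))  # close to 1.0
```

The correlation is almost perfect, yet banning ice cream would prevent no drownings; only the confounder (temperature) explains both. A purely correlational model cannot make that distinction on its own.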
Bias

This year IBM discovered that some of its face recognition software, which uses AI, is significantly more accurate for white male faces than for other types of faces. This is because the AI was trained on data containing far more examples of white male faces. The data were collected from the internet, which includes far more information about white men than about any other group, even though white men are a minority of the world population. It is hard enough for humans to understand that the world presented to them by various media is in no way representative of the real world; AI finds this even more difficult. Despite all the breakthroughs with big data, there is a serious problem with bias, and AI, at the moment, is especially vulnerable to it. So although humans are inherently biased, as our painfully slow progress with equality shows, we are at least aware of our flaws.
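A sketch, using invented figures, of how this kind of bias hides in plain sight: when one group dominates the evaluation data, a single headline accuracy number can mask a large disparity.

```python
# Hypothetical face-recognition results: 90 faces from group A (well
# represented in training data) and 10 from group B (under-represented).
# Each entry is (group, was the face correctly recognised?).
results = (
    [("A", True)] * 90        # group A: all 90 recognised
    + [("B", True)] * 3       # group B: only 3 of 10 recognised
    + [("B", False)] * 7
)

def accuracy(rows):
    """Fraction of faces correctly recognised."""
    return sum(ok for _, ok in rows) / len(rows)

print(accuracy(results))                              # 0.93 overall
print(accuracy([r for r in results if r[0] == "B"]))  # 0.30 for group B
```

The overall figure of 93% looks respectable, but the system fails 70% of the time for the under-represented group, which is exactly the pattern the imbalanced training data would predict.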
In summary, AI is capable of wonderful and dreadful things, and it continues to penetrate and transform our society. But, as ever with new technology, we consistently overestimate its capacity. We live in the confident hope that AI will change society for good. There will also be plenty of room for human creativity, soft skills and general reasoning, because AI is not as bright as we often think it is.