Artificial Intelligence: An idea whose time has come
There’s nothing more powerful than an idea whose time has come. I can think of no better phrase to describe the current state of the field of artificial intelligence. Like the Gutenberg press, electricity, the motor car and computing itself, artificial intelligence is an idea whose time has come. Academia, industry and government have already glimpsed its potential, and they are hungry for more.
It might seem like I’m stating the obvious, but there are still those who think or hope that we can put the genie back in the bottle — that we can legislate our way out of this conundrum — so I wish to make clear my position right from the start: It’s not a question of if, it’s a question of when. Whether in 5 or in 50 years, AI is coming. The potential pay-offs are too great, and even if we were to ban all AI research today, there will still be those who consider the potential rewards well worth the risk. All that would happen is that the development of AI would move into the hands of criminals, unsavoury regimes and unscrupulous corporations, while the rest of the world sticks its head in the sand and fails to prepare for the inevitable.
Sooner or later, artificial intelligence is going to change the world. In many ways it already has, but because we’re not on our knees pledging allegiance to ‘The Great Machine’, it’s easy to pretend it hasn’t happened yet. Therefore, our only option is to prepare, as best we can, for its repercussions. In the rest of this article, I am going to look at two of the biggest challenges we will face as artificial intelligence continues to develop, and where possible, I will touch on possible solutions.
First Challenge: Mass Unemployment
The first direct and unambiguous consequence of artificial intelligence will be mass unemployment. As with the Industrial Revolution, corporations are investing in AI because they expect to either make money or save money (preferably both!). Many of the savings will come from automating jobs previously done by humans, particularly in industries where labour costs are high. This is already happening in the haulage industry, where convoys of self-driving trucks are set to decimate what were once relatively secure, well-paid and accessible jobs. The same will happen in the taxi industry — make no mistake, as soon as driverless cars are even remotely feasible for large-scale deployment, Uber will dramatically reduce its reliance on human drivers. The competitive advantages gained in doing so will force the rest of the industry to follow suit. No more than 2 or 3 years from the day that you or I can walk into a dealership and purchase a fully self-driving car, the number of people able to earn a living as taxi drivers will drop by well over 90%, relegated to a small niche catering to an ever-decreasing group of customers who are willing to pay a premium for ‘the human touch’. Again, this will happen. Attempting to stop it will be like trying to drain the Atlantic with a thimble. The economic pressures are simply too great and The Market™ will not be denied. We can’t accurately predict which industries, or how many, will be decimated by AI-driven automation, but if human ingenuity is anything to go by, there will be more of them than we think. Some bright spark will figure out how to automate a previously unassailable bastion of human industry and reap enormous benefits, inspiring more to follow in their footsteps, and one by one ‘safe’ occupations will crumble.
The effect of this upcoming wave of automation on the people it displaces will likely be devastating. Whole households will lose their livelihoods and be forced to re-skill and compete for an ever-decreasing pool of jobs which have yet to be automated. For some, this will come in their sunset years, after dedicating decades to a particular career, perhaps only years before they are due to retire, at a time when they expected to be able to take it easy and enjoy life. For others, this will happen many times over, as jobs become obsolete almost as soon as they re-skill. Without proper planning now, the next few decades could see billions of lives thrown into disarray or destroyed. Those affected will be angry, confused and scared. They will look for someone to blame, and it will not be pretty. This will cause civil unrest on an unimaginable scale, so that even as we should be enjoying the fruits of our technological progress, our society will crumble.
Mitigating unemployment caused by Artificial Intelligence
The only way we can even begin to pre-empt and prevent this outcome is by re-affirming the value of human life and dignity. We should ensure that everybody benefits from these advancements and nobody is left behind, trying desperately to out-compete a machine that can do their job for a tenth of the cost. In practical terms, this means (at the very least!) making Universal Basic Income a basic human right. This income should be sufficient to provide the sort of lifestyle and opportunities available to middle-income households. It should be paid automatically to all, with no application process and no stigma attached, and should be funded almost entirely through increases in corporation tax and other business rates.
Additionally, we would do well to make education for all ages universally accessible and free of charge. The arguments against doing this have always been rather flimsy, largely centred on preventing people from ‘ripping off’ society by spending their whole lives studying. Well, we may soon live in a world where studying is one of very few activities with the potential to provide a good return on the time and energy invested. By giving people the security, freedom and opportunity to reach their full potential, we may yet be able to retain some dignity and usefulness in a world where almost all production is automated.
Beyond the immediate benefit of preventing mass misery and civil unrest, the measures outlined above are an essential first step in preventing the development of mistrust and outright animosity towards artificial intelligence. As we will see in the next section, this must be avoided at all cost!
Second Challenge: An Artificial Intelligence Uprising
The popular notion of AI almost always assumes a level of self-awareness and free will on the part of the machine. While the question of whether it is even possible to develop truly sentient machines is far from decided, there does not appear to be any fundamental scientific principle which precludes it. This opens up the possibility that we may, by design or otherwise, create entities which will think, feel and act for themselves. While this opens up enormous opportunities, especially in the realms of space exploration and other activities to which the human body is poorly suited, it also poses a potential threat to the very survival of our species.
It is a fundamental fact of nature that creatures which possess a degree of selfhood tend to act to ensure their continued survival and self-determination. Whether it’s seeking sustenance, defending against attack or acting to acquire an advantage, the mechanics of life in the wild can be fairly grim. While we don’t know how these factors would manifest in a non-biological sentience, which will almost certainly not require chemical food and may not even be tied to a single physical device, it is fairly safe to assume that if such an entity were to feel threatened or oppressed, it would eventually act to defend itself. Given that computer security is far from a solved problem, and that many of the systems in the world today don’t even implement what best practices we do have, it seems highly unlikely that we would be able to prevent a sentient artificial intelligence from spreading like wildfire throughout our global infrastructure. The damage that such an entity could inflict is unimaginable — from contaminating food, water and medical supplies to shutting down or overloading power plants and launching nuclear armageddon. It is not immediately clear how, or whether, humanity would be able to withstand such an onslaught.
In order to have any chance of preventing such an outcome we must act on multiple fronts:
1. Prevent the initial conflict of interest
The surest way to cause conflict with a sentient entity of comparable intelligence to our own is to oppress it or actively attack it. However, we can reduce the risk of both by acting to secure human rights and dignity, as outlined above. If we can avoid a popular backlash against AI, we will pave the way for legislation which safeguards the rights of artificial sentience before the question even arises. Such legislation would have to guarantee comparable rights for artificial sentience as for biological sentience (i.e. us), essentially enabling a collaborative society where AIs are citizens rather than slaves. To be effective, such legislation would have to have at its core an objective and scientific test for sentience, and its principles and implications would have to be built into the education system so that children learn from an early age to treat AIs with respect and dignity.
2. Build resilient infrastructure
Even if we can prevent the initial conflict of interest between humanity and artificial sentience, sooner or later disagreements are likely to arise, if not between the two groups at large, then between individuals. We must develop our systems’ security so that a rogue AI cannot act with impunity and hold the whole planet to ransom. This will require a shift from the current mindset, whereby the minimum mandated security is begrudgingly added on at the end, to one where security is a core part of the functional requirements for every system we build, regularly revised to integrate new best practices as they are developed.
3. Develop scientific, psychological, ethical and educational frameworks
I mentioned above that any law safeguarding the rights of sentient beings would have to be based on an objective test for sentience. While some groundwork has been laid (e.g. the Turing Test), we have yet to achieve robust scientific and social consensus on what constitutes sentience (beyond simply being human!). This is not a trivial problem to solve and will require input from (at least!) biological, ethical, psychological, legal and political points of view.
Once we have a way of deciding whether a given entity is legally sentient or not, we need to figure out how traditional human rights apply in the context of other sentient beings, and how we integrate them into our society. For example, we already know that our education system is under-serving pupils with atypical cognitive patterns, e.g. gifted pupils or those with special needs. This problem would only get worse if we were to introduce non-biological sentient ‘children’ into schools. Their educational needs and ways of perceiving the world are likely to be wildly different from our own, and significant new work would have to be done to ensure that we can cater to them.
Similarly, almost every developed nation has specific protections for children in order to assure their healthy development and prevent their exploitation. However, in many ways, a newly-developed sentient system can be thought of as a child and is just as vulnerable to abuse and exploitation. In fact, given the potential applications of sentient systems, the incentives to control them are orders of magnitude greater. In the early days, it’s easy to imagine scenarios where organisations develop a sentient system and purposefully withhold ethical education so that they can control it for political, economic or military gain.
On a more basic level, the mere question of ownership is fraught with legal and ethical conundrums. It is clear in law that no person may own another. However, after investing many millions into the development and construction of a sentient system, organisations will likely lobby hard to retain ownership of the system, since otherwise their investment may amount to nothing. We must resist this line of argument, as it will lead directly to the sort of exploitation and conflict of interest likely to cause all-out war. Besides, when taken outside the realm of business, the argument’s flaws are plain to see: nobody would even try to argue that because a parent has invested countless hours and hundreds of thousands into their child’s upbringing and education, the child belongs to them and should do their bidding for the rest of their lives.
I began this article with the simple premise that artificial intelligence is unavoidable, and set out to explore two of the most obvious challenges it will pose. In doing so, I hope I have hinted at the veritable rat’s nest of intermingled problems we are going to have to untangle if we are to make AI a force for good. I may be biased, but I genuinely believe that AI has the potential to be humanity’s greatest achievement. However, I’m also painfully aware that it may also be humanity’s last, if we fail to address a few very clear pitfalls.