Should we fear the rising wave of artificial intelligence (AI) sophistication? The giant leap in technical development we are witnessing has put a big question mark over the fate of our mechanical creations, or rather, over our own.
A frightening future may well lie ahead of us; at least, that is what some of the brightest minds on the planet believe. In recent days you may have heard geniuses such as Bill Gates and Elon Musk express their fear of a future controlled by robots. Stephen Hawking even warned that “Artificial Intelligence could spell the end of the human race”.
Isn’t it time the law had its say in this troubling issue?
The once-futuristic concept of robotics has become a tangible reality. Confronted with the frightening progress of technology and the wide expansion of artificial life, scientists fear the worst. In January 2015, the world’s leading AI researchers, together with experts in economics, law and ethics, signed an open letter aimed at keeping intelligent agents under control while maximising the future benefits of AI. Coordinated by the Future of Life Institute, a committee of experts explores and analyses the many problems and questions surrounding the construction of intelligent entities.
Between danger and progress
Derived from the Czech robotnik (“slave”) or robota (“forced labour”), “robot” is widely used as a generic term for a machine capable of performing complex actions automatically. When combined with AI, the scientific field devoted to creating intelligent machines, robotics may represent a potential danger.
The rapid technical growth of the past few years has proven beyond a doubt the benefits of technology and AI. Robots have played a crucial role in simplifying our lives, firmly securing their place in our society. Mechanical labour has been significantly automated, sparing us needless effort and time while guaranteeing consistent, high-quality results. The future of AI looks prosperous: complex surgical interventions are now carried out by intelligent machines; demining robots, ready for deployment, will save lives; we have created robots able to perform martial arts, sexbots conceived to satisfy one’s wildest fantasies, domestic robots for the lazier among us, and so on.
Nevertheless, such a proliferation of machinery carries inherent dangers. One of the major risks directly affects the labour market, white-collar jobs in particular. According to The Telegraph, robots are set to replace estate agents and traffic wardens by 2065. Experts say the advent of AI represents a serious threat to 35% of Britain’s jobs over the next 20 years, confirming Keynes’ theory of “technological unemployment”. As AI grows more sophisticated, the number of positions likely to be taken over by a machine is rising alarmingly. And as low-skilled jobs disappear, employees must develop new skills in order to remain relevant.
Autonomous weaponry represents an imminent danger which deserves full attention if we are to prevent a Terminator scenario. Scientists are concerned. Standing on the side of the sceptics, Bill Gates calls for reflection. According to Gates, “there will be more progress in the next 30 years than ever”. It is time to tackle the issue before we face the inevitable consequences of a robocalypse and the danger of what scientists call the “technological singularity”, the hazy stage at which technological entities achieve full autonomy.
The legal challenge of setting up an ethical framework
We have successfully faced major existential dilemmas before. This time, it is a race against the clock. How do we “rationalise” robots’ autonomy while mitigating the inherent risks? We need to build machines able to reflect on their own behaviour as they increasingly run themselves. From a legal standpoint, many questions arise: the legal qualification of robots and “electronic personhood”, liability, and the administration of intellectual property rights, to mention a few. At present, there is no comprehensive legislation governing machines’ interactions. Unfortunately, Isaac Asimov’s “laws of robotics” remain insufficient and largely inadequate to reality. However, serious efforts are being made on this front. In 2006 the European Union adopted a directive regulating the machinery sector, and many EU-funded initiatives have sought to set up a legal framework through guidelines adapted to the specificities of robots.
But there is a huge gap between what is legal and what is moral. Can we integrate an ethical dimension into the process of building an entity devoid of any moral sense?
To introduce a moral-by-design approach for future generations of machines, robots must first be able to understand their physical environment. Coding morality into an algorithm implies, first of all, a clear and complete understanding of morality itself. That means being able to distinguish a machine’s capacity to respond to an order from its ability to tell right from wrong. The US Department of Defense is carrying out an ambitious project to optimise the use of robot soldiers in war zones: by breaking down the multiple components of human morality, scientists can integrate into an algorithm the basic behaviours that will nourish a robot’s “awareness”.
When Deep Blue beat Garry Kasparov at chess in 1997, a strong message was delivered: thinking machines are designed for perfection. When confronted with human fallibility, they triumph. Some believe AI is the number one risk of this century. The arbitrary power of humankind is likely to shrink as big data flourishes, defining the outlines of an embryonic machine consciousness. And as our health becomes increasingly dependent on automated tools, from heart pacemakers to artificial limbs, we expose ourselves to the growing danger of being manipulated by our own creations.
At the risk of stifling innovation, robotics needs to be regulated. Safeguards and ethics must be intrinsically embedded into robots as they grow more sophisticated and autonomous. With this in mind, Google has set up an AI ethics board to oversee its work on robots. Having acquired several robotics and AI companies – including DeepMind and Boston Dynamics – the Silicon Valley giant is preparing for what could become the ultimate challenge of the future. As robots grow more autonomous, technicians and designers have a responsibility to tackle the issue with devotion and assertiveness, going beyond the mere PR objective of showing people that “we care”. Like many scientists, Elon Musk shares this concern and warns that by committing ourselves to this blind race towards progress and innovation “we are summoning the demon” (Google, “don’t be evil”!).
We are blindly heading down a road shadowed by the growing threat of robots outsmarting humans. Allowing robots to become smarter than us – is that really a smart choice? This underappreciated topic warrants serious consideration, now more than ever, before mankind winds up being mastered by its own creations.
Image “Nao At Work” by Marc Seil, fotocommunity.com. Some rights reserved. This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.