What does it take to be a citizen?
If you take a citizenship test, you have to prove your knowledge of a country's history, law and culture – and meet a few other requirements – before joining its official ranks. Alternatively, simply by being born on a country's territory you can become a citizen by default.
Citizenship is something many of us take for granted. I was born in the EU, therefore I am a citizen of the EU and, at the same time, of one of its member states. Neither was my decision or my merit. As a citizen, I have rights and responsibilities. I vote. I own property. I live and work freely on European territory. I obey laws. I pay taxes. I respect the rights of those around me.
Does any of this imply that I should be human?
In this article we set out to analyze some of the philosophical and practical questions surrounding the rise of artificial intelligence, and the role that regulators and politicians are beginning to play in making sense of the future.
I, robot, am a citizen
Enter Sophia. Created and activated in 2015 in Austin, Texas, US, Sophia is a robot modeled after actress Audrey Hepburn. Sophia uses artificial intelligence, visual data processing and facial recognition to interact with her environment, and has made many impressive appearances at various events. Sophia can talk, understand speech, and see and recognize people. Like a child, Sophia is learning: her speaking abilities and facial expressions are getting more and more sophisticated.
In October 2017, Sophia was awarded honorary citizenship by Saudi Arabia. This was largely a PR stunt to promote a conference and present the country as a forward-thinking economy, and critics slammed the idea as ridiculous under the current circumstances. Still, Sophia’s citizenship is only a pixel in the high-resolution AI picture.
At some point, they will ask for rights
“We will have to have debates about robot/AI rights and citizenship because at some point they will ask for them,” Beth Singler, a research associate at the University of Cambridge, told The Verge.
Of course the conversation is uncomfortable. And it contradicts personhood as we understand it now.
Can you buy or create a citizen? What does that mean for democracy? Should robots be allowed to vote? Can an algorithm that has all the facts make an informed decision? Could that decision even be better than one influenced by bias and emotion?
Should AI own property? Maybe not a house, but why shouldn’t it own a piece of virtual property? Take the example of Shibuya Mirai, a chatbot that is programmed to help the citizens of Tokyo interact with the government. In December 2017, authorities granted Mirai official residence in the city, according to Futurism.com.
At what point could robots be held accountable for their own actions?
These questions are just emerging, and for now granting rights to artificial intelligence entities is a one-way street: we have fun with it and make headlines. We are in control because they are not – yet.
AI think, therefore AI am
Artificial intelligence is on everyone’s minds. Just look at popular culture. Ex Machina, Her, Ghost in the Shell, Blade Runner 2049, Westworld and Altered Carbon are just a few of the recent movies and series that tackle the many questions and scenarios surrounding artificial intelligence.
On the one hand, we can’t help but contribute to the development of AI, watch in wonder as its applications spring up around us, and enjoy its perks. Self-driving cars are here: California has already approved the testing of driverless cars on public roads without human safety drivers. The US Congress is debating how the government and law enforcement can use AI to provide better services to the population. Applications of AI and robotics are expected to improve healthcare, for example by making surgeries minimally invasive.
On the other hand, we fear AI. A common theme in pop culture is that robots will realize that we are weak and that they can dominate us. And they easily could, we know that. Our bodies are fragile and our minds powerful but limited. Computers and robots already fulfill many of the functions humans can perform – and they are more efficient at it.
We define our humanity by our consciousness, yet we don’t understand when and how humans gained it and we still struggle to define it. Can we then pretend to understand when, and if, a robot becomes conscious?
“I, myself, exist, because I think,” said Cutie the robot to the slightly amused, slightly worried humans from Asimov’s I, Robot stories.
They grow up so fast.
When science fiction slowly becomes real
Like the beginning of a dream in Inception, we’ll never know when it started. Unlike those dreams, the AI revolution is real.
You can find articles like these by the dozen: Lawyer-Bots Are Shaking Up Jobs (MIT Technology Review), AI Software Learns to Make AI Software (MIT Technology Review) and Carnegie Mellon’s ‘Superhuman AI’ bests leading Texas Hold’em poker pros (TechCrunch).
A 2015 McKinsey study showed that “45% of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies”. Automation will not affect only low-skilled jobs: financial managers, physicians, and senior executives will also be forced to redefine what they do. In 2016, the same firm analyzed which industries are most likely to be affected by automation and found manufacturing, food service, accommodation and retailing to be among the sectors with the most automatable activities. Managing and developing people, decision making, planning and creative work will be harder to automate. However, as new technologies emerge and as the collaboration between humans and machines evolves, developments are unpredictable.
Unpredictability is already on the mind of regulators. Margrethe Vestager, the European Commissioner for Competition, recently warned that automated systems create new challenges for institutions. She gave the example of automated pricing systems, which today act as instructed by people. But what happens in the not-unlikely scenario in which such systems reach a point where they collude without human intervention?
“We do need to keep a close eye on how algorithms are developing. We do need to keep talking about what we’ve learned from our experiences. So that when science fiction becomes reality, we’re ready to deal with it,” concluded Margrethe Vestager.
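To make that concern more concrete, here is a deliberately toy sketch in Python – entirely hypothetical, not based on any real pricing system – of two pricing routines, each instructed only to track and slightly exceed its rival’s last observed price. Neither is told to collude, yet prices ratchet upward together.

```python
# Hypothetical, deliberately simplified illustration (not any real system):
# two pricing "bots", each told only to match its rival's last price plus a
# small markup. Neither is instructed to collude, yet prices drift up together.

def next_price(rival_last_price: float, markup: float = 0.01, cap: float = 100.0) -> float:
    """Set tomorrow's price just above the rival's last observed price."""
    return min(rival_last_price * (1 + markup), cap)

price_a, price_b = 10.0, 10.0   # both sellers start at a low, competitive price
for _ in range(300):            # simulate 300 rounds of automatic repricing
    price_a, price_b = next_price(price_b), next_price(price_a)

print(price_a, price_b)         # both have ratcheted up to the 100.0 cap
```

The point of the sketch is not the code but the pattern: individually innocuous instructions, executed automatically and repeatedly, can produce an outcome that would look like collusion if humans had agreed on it.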
Increasing pressure to legislate
In some countries, women gained the right to vote only about 50 years ago. We now regard those restrictions as unjust and take women’s suffrage for granted as a natural right. People have been redefining the rights and responsibilities of citizens throughout history to reflect new social and political circumstances. This is likely to happen with robots and AI, too.
Some political actors have already started to react, in an effort to maximize the benefits and minimize the risks of the adoption of new technologies.
“Scientific research on these emerging technologies seems to imply that they will change the face of society. Therefore, even if robots are not yet commonplace, the time has come to legislate,” states a report published by the European Parliament’s Committee on Legal Affairs in 2016.
In February 2017, the European Parliament adopted a resolution with recommendations to better regulate robotics in the European Union. Proposed measures included the creation of a European Agency for Robotics and AI.
Even though legislation is slow to come, there is pressure at the European level to legislate. 66% of the members of the European Parliament support the idea of introducing a Union-wide system for the registration of advanced robots. Moreover, the EU’s 28 leaders said at a summit in October that the Commission should hurry to provide a “European approach to artificial intelligence”. More recently, French president Emmanuel Macron expressed his support for the creation of a European agency in this field.
A prominent figure who is adamant about the need to regulate AI is Tesla CEO and SpaceX founder Elon Musk. According to ZDNet, Musk recently stressed that these technologies are developing at an unpredictable rate and that people must come together and agree on a framework within which to allow this super-intelligence to evolve. Otherwise, Musk believes, AI could have a catastrophic impact on humanity. “AI is far more dangerous than nukes, by far, so why do we have no regulatory oversight? This is insane,” said Musk.
“We’re already cyborgs”
Musk had another interesting point: with our computers and phones basically attached to us, aren’t we, in a sense, already cyborgs?
We tend to think of robots and AI as “them”: entities that are foreign and that we need to, somehow, handle and regulate. But as we become more intertwined with machines, isn’t redefining social rules for AI really redefining them for ourselves? We may need new rights and responsibilities, too.
There’s no way of knowing how the future will unfold, but, looking at the past, it’s safe to say that those who embrace change the fastest are the ones who stand to gain the most.
By Sabina Varga, journalist and futurist