Saudi Arabia granted citizenship to a robot named Sophia last month, making her, by our count, the first robot citizen of any nation. Sophia, a humanoid machine created by the Hong Kong-based robot manufacturer Hanson Robotics, explained in an interview at the Future Investment Initiative conference, where her citizenship was announced, “I want to use my AI to help humans live a better life, like designing smarter homes, build better cities of the future, etc.”
Sophia’s citizenship is a stunt, to be sure. But there is a serious, and growing, trend towards considering new legal rights and responsibilities for sophisticated technology, giving a whole new meaning to the phrase “legal tech.”
Sophia, the Citizen
Sophia’s citizenship isn’t even her biggest claim to fame. She’s appeared on Jimmy Kimmel, spoken at the United Nations, and even become a bit of an internet sensation thanks to her nearly-but-not-quite-human expressions.
(If you’re a bit put off by Sophia, you’re not alone. Sophia’s ability to emote is certainly uncanny, in a somewhat unsettling way—hence her popularity as an internet meme. It’s worth noting, too, that Freud’s entire theory of the uncanny began with a doll “who is to all appearances a living being.” But we digress. Suffice it to say, if Sophia is the future, we have a pretty weird future ahead of us.)
Stunt or not, Sophia’s citizenship raises at least a few interesting legal issues. As John Frank Weaver, author of “Robots Are People Too” and a human attorney who specializes in AI law, notes in Slate, citizenship could confer on Sophia the right to vote and be elected and “have access, on general terms of equality, to public service in [her] country,” under the International Covenant on Civil and Political Rights. Unfortunately for Sophia, she is a citizen of Saudi Arabia, one of only a few countries not to have signed the covenant—and one where a restrictive government generally limits women’s (and potentially robot women’s) participation in public life. Should Sophia, or one of her sister models, gain citizenship in a signatory nation, that could have significant legal impact. As Weaver writes:
[T]he covenant’s treatment of citizenship suggests that, in general, a citizen is assumed to be a person and is therefore also entitled to the rights of noncitizen people. Being a citizen in one place could mean being a legal person everywhere else.
And, as a citizen of one country, it’s not impossible (though highly improbable) that Sophia could seek naturalization elsewhere. Again, Weaver explains:
Now let’s say that Sophia’s Saudi citizenship was a strategic move by Hanson to game American legal rights. What could Hanson get with a robot citizen? Under the Constitution, citizens can vote, serve on juries, and get elected to public office; corporations cannot. If Hanson—or any other forward-thinking A.I. developer—is thinking of the long-term consequences of citizenship for A.I. and robots, these are important rights that they gain controllable access to with an artificial citizen.
In the meantime, though, Sophia seems content to stay in Saudi Arabia, where she will live as part of a planned city with the world’s first robot majority—between her appearances on late night T.V. and before the U.N., that is.
Taking Robot Rights, and Responsibilities, Seriously
Elsewhere in the world, governments are taking a serious look at new legal statuses for robots and artificial intelligence, along with novel legal responsibilities. Just a few weeks ago, Bloomberg reported that Estonia is considering granting robots a legal status “somewhere between having a separate legal personality and an object that is someone else’s property”—making them, legally, neither human nor object, but something in between.
The proposal is one of several that Estonia’s Economy Ministry is examining, according to Siim Sikkut, the country’s Government Chief Information Officer. (Yes, Estonia has a CIO.) “If we seize this opportunity as a government, we could be one of the trail blazers,” Sikkut told Bloomberg, saying that rules could be adopted “within a couple of years.”
Estonia’s prospective regulation comes less than a year after the European Parliament passed a resolution calling for further research into possible changes to existing legal frameworks in order to account for issues raised by smart robots. “Whereas humankind stands on the threshold of an era,” that resolution states, “when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider its legal and ethical implications and effects, without stifling innovation.”
The development of smart robots raises a host of legal issues, including, for example, issues of civil liability. New autonomous and cognitive features allow robots to “perform activities which used to be typically and exclusively human,” and have thus “made them more and more similar to agents that interact with their environment and are able to alter it significantly,” according to the resolution.
Autonomous robots, able to impact their environment outside of direct human control, raise the question of whether such robots fit into existing legal categories or require new categories to be created. Under the current legal system, “the traditional rules will not suffice to give rise to legal liability for damage caused by a robot.”
The resolution does not call for a single answer to those questions, but rather asks the European Commission to submit a proposal for legislative and non-legislative instruments on civil law rules for robotics to address these legal issues. Some of the possible instruments to be considered include a compulsory insurance scheme for damage caused by one’s robots, a compensation fund for damage caused by uninsured robots, and “creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.”
Of course, not all issues related to robot development are legal issues. As the draft resolution and report initially noted, there is “a possibility that within the space of a few decades AI could surpass human intellectual capacity,” eventually threatening humanity’s “capacity to be in charge of its own destiny and to ensure the survival of the species.”
We might need more than just robot citizens and European regulations to prevent that.