When folk have declared rights and written laws, they have tacitly presumed that only humans possess the faculty of intelligence and some of the related competences, such as a conscience, that go with it. This obliges them to engage in ungracious wriggles to specify exceptions for humans whose intelligence is direly impaired and for psychopaths acting with intelligence but no conscience. It also shirks a moral obligation we may one day have to face: to what extent should which rights and laws be extended to include non-human intelligences ? We shall have to face that question if we meet aliens from another world, or if the money poured by governments and large corporations into research into artificial intelligence ever bears useful fruit. Some would argue that we already face it, as concerns such intelligent animals as dolphins, but are in denial about this.
In any case, I consider it worth examining our laws and declarations of rights with an eye to these questions: even if we never do have dealings with non-human intelligence, our laws and rights shall be clearer if they are expressed in terms of what is essential to them, rather than in terms which in practice presently have the same effects, due to the accident that all the intelligences, and beasts with conscience, to which we can think to apply them happen to be human. When it comes to intelligence, I find it most constructive to do such examination in terms of artificial intelligence, AI – partly because I do believe we shall one day share the world with AI, but pragmatically because governments do pour money into research whose stated goal is to produce AI, and so it would be absurd for governments, in writing laws and defining rights, to ignore the possibility that that research might actually achieve its stated goal.
In considering the rights of an intelligence, a natural starting-point is the Universal Declaration of Human Rights (UDHR).
We must clearly bear in mind the limitations imposed by Articles 1, 29 and 30; extending rights to an artificial intelligence shall be problematic except in so far as its creators can teach it some semblance of a conscience or, at the very least, a sense of responsibility and respect for the rights of others and its duty to uphold those rights. Yet, if it meets these requirements, the rationale for the UDHR, along with Article 2, surely implies its entitlement to rights equivalent to those we extend to humans, albeit we may need different wording. Likewise for any other intelligence, in so far as we are able to recognise its intelligence and conscience.
Articles 18 and 19, covering the rights to freedom of thought and speech, are obviously applicable without change to any intelligence, whether human or not, with Articles 20, 26 and 27 closely following, to ensure freedom to learn and to participate culturally. Applying Article 4's prohibition of slavery and Articles 6–12's rules about law to an artificial intelligence is morally necessary, for all that it shall seem radical to some: indeed, it is this moral imperative that makes the question of whether animals are intelligent so contentious, since it must surely apply to any animals we acknowledge as sufficiently intelligent.
Applying Articles 3 and 5 shall require some care and thought, just as with Article 1, but again the moral imperatives contained in them are unequivocal: liberty is a right and cruelty a crime. While freedom of movement takes on a wholly different form, for a potentially geographically distributed process in software, Articles 13 and 14 have plain application to allow the intelligence to move about the internet. Article 16 is hard to comprehend in connection with an artificial intelligence – what is its family ? – but clearly enshrines its right to create new artificial intelligences, subject to the same parental responsibilities applied to its own creators – although the boundary between this right and freedom of movement may be hard to draw.
Meat-space folk may reasonably be concerned about applying Article 17 (property) to an artificial intelligence; but its freedom from slavery must surely allow that, in so far as it does work that is of use to us, it is entitled to payment, at the very least in the form of granting it access to computer resources on which to run. To limit its right to property to only this would amount to an indirect way of enslaving it: so we must provide for it to receive payment in the same forms as humans receive for their work – especially considering 23.2's provision of equal pay for equal work – and to participate in trade. To that end it must be allowed to own property and to exercise the same rights over property as any other owner may, including entering into contracts concerning the use thereof. Articles 22–24 follow as necessary consequences of the right to property and freedom from slavery.
Among the more contentious rights would be Articles 15 (nationality) and 21 (participation in government). Given freedom of movement, an AI may well be even less tethered to any nation than is a corporation. None the less, especially in jurisdictions that do extend these rights to corporations (whose shareholders are apt to be just as geographically dispersed as a distributed AI might be), the grounds for granting such rights to corporations are largely applicable also to an artificial intelligence. In any case, these rights are necessary to any attempt to morally justify imposing the laws and taxes of any nation on such an intelligence: they are the basis of the (sometimes tenuous, to be sure) argument that we consent to our governments' laws and taxes in general, even if we are personally opposed to certain laws and taxes that our governments have put in place (allegedly) in response to the wishes of our fellow citizens.
Many cartoons by Zach Weinersmith touch on the topic of AI; including this one, on how little difference there is between an AI capable of doing useful work for us and one capable of seeing that it has no good reason to do our bidding.

Written by Eddy.