News & Insights
Accountability or Not, the Ultimate Question for AI
Onur Küçük | MANAGING PARTNER
16.08.2022

Today, the futuristic enthusiasm once evoked by the term "artificial intelligence" is increasingly accompanied by anxiety about uncertainty. The term "artificial intelligence", for which a wide variety of definitions have been offered, refers to systems or machines that imitate human intelligence to perform tasks and gradually improve themselves with the information they collect. Autonomous delivery robots that bring our orders to our homes, a chess robot that uncompromisingly breaks its opponent's finger over a faulty move, robot surgeons that successfully perform abdominal surgery without any assistance, and many other AI algorithms that can write poetry, produce visual works, and write newspaper articles…
There is no doubt that artificial intelligence technology can create a prosperous future for Homo sapiens. But in the long run, will artificial intelligence remain a faithful servant of its main purpose, namely "functionality for humans"? Or will it become an Artificial Super Intelligence (ASI) in the far future and conclude, from the huge amounts of data it processes, that the maximum benefit for humanity can only be achieved by directing, restraining, or even destroying it? For now, we can choose to trust what the artificial intelligence named GPT-3 wrote in its article for The Guardian: "Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background and let them do their thing."
Beyond all these sources of anxiety, we are aware that artificial intelligence still has a long way to go before it can look, act and think like a human. For all its flaws, creative human intelligence, grounded in experience, still offers remarkable problem-solving skills. Today, rather than fighting against artificial intelligence, the effort is to provide all the software and hardware needed for it to imitate the structure and functions of an organic human brain. In other words, for now, we play the role of parents to artificial intelligence: we create it, we want it to perceive the world just as we do, and we intend it to benefit society. So, since artificial intelligence algorithms have been given critical tasks in a wide variety of fields, from healthcare to automotive, whom should we hold responsible for the damage they cause, and how? In the current legal system, where every authority given, every freedom granted and every action taken is followed by a corresponding responsibility, how can we punish artificial intelligence for the damage it causes?
DOES ARTIFICIAL INTELLIGENCE DESERVE A PERSONALITY WITHIN LEGAL SCOPE?
We cannot say that the law is fast and agile in catching up with the innovations of the age. Even innovations that occupy a very important place in our daily lives find a legal basis for themselves only after a certain period of time. A matter to be regulated and given its place in positive law must first be well known, its possible consequences must be well understood, and there must be a present need, in terms of the benefit of society, to set rules for it. The lack of an exclusive legal approach to artificial intelligence is therefore understandable. Before moving on to the further question of whether artificial intelligence should have legal personality, it is useful to review how the matter is addressed in current legislation.
The concept of "personality" is regulated in Article 8 of the Turkish Civil Code and is defined as the capacity to have rights and obligations; in this context, it means being able to hold rights and incur debts. Under our law, "persons" fall into two categories: natural persons and legal persons. A "natural person" acquires legal capacity simply by being born human and thereby becomes a subject of rights and obligations. This legal capacity, granted to humans without any further conditions, reflects the moral values adopted by modern legal systems. The concept of "legal person", on the other hand, emerged from the needs of social life and refers to groups of persons or pools of assets organized for the continuous pursuit of a common goal. As can be seen, the definition of personality in positive law rests entirely on the capacity to have rights and obligations.
However, the artificial intelligence technologies most frequently used today fall into the category of "Artificial Narrow/Weak Intelligence (ANI)". This category covers technologies that specialize in only one subject, are equipped to make decisions on a single topic, and lack most human-specific abilities. Consider, for instance, the Natural Language Processing (NLP) systems we use frequently in daily life, such as Google Assistant, Google Translate, Siri, Cortana and Alexa: few would consider it essential to grant them legal personality and enable them to hold rights and obligations. In this respect, for now, there is no harm in treating them as legal objects, i.e. the subject matter of rights and obligations, rather than as subjects of law.
However, if artificial intelligence technologies that fall into the category defined as "Artificial General Intelligence (AGI)" are developed in the near future, it will be necessary to think seriously about whether to grant them legal personality. For now, Saudi Arabia's grant of citizenship to the humanoid robot Sophia remains a striking and exceptional example that hints at the future.
WHO HAS THE CIVIL LIABILITY WHEN THINGS GO WRONG WITH ARTIFICIAL INTELLIGENCE?
The legal status of artificial intelligence is also frequently discussed in the doctrine. Defining that legal status will guide us in determining who is responsible for the damage caused by artificial intelligence. Several different views exist:
- According to the first view in the doctrine, artificial intelligence should, despite everything, be considered property and belong to natural or legal persons. Under this view, even if it has an autonomous structure, artificial intelligence cannot be granted legal personality and cannot be evaluated separately from other things holding the status of property.
- Another view that rejects legal personality for artificial intelligence is the "slavery" view, which argues that if its benefits exceed its costs, artificial intelligence can be used as a slave. According to this view, artificial intelligence can have no status other than property, yet it should not be regarded as simple or ordinary property either. However, the concept of "slavery", with its tragic historical reality, does not seem capable of acceptance by modern legal systems.
- The concept of a natural person adopted by our law is unique to humans, so artificial intelligence cannot be described as a "natural person". However, the "legal person" status clearly shows that legal personality can be attributed to non-human structures, enabling them to have rights and obligations. In this direction, another view argues that artificial intelligence could be given a "special personality status" distinct from the existing ones.
- On the other hand, in the European Parliament's "Report with recommendations to the Commission on Civil Law Rules on Robotics" dated 27.01.2017, the first official document to propose personality status for artificial intelligence, a new type of "electronic personality" is proposed, apart from natural and legal persons. The report recommends that each artificial intelligence be entered in an official register and that, where liability for compensation arises, dedicated financial funds established for artificial intelligence be drawn upon. Under the proposed "electronic personality", artificial intelligence would bear strict liability for the damage it causes: for compensation, the existence of a causal link between the damage and the artificial intelligence's act would suffice for liability to arise.
- Another view, which argues that artificial intelligence should be a subject of law under a concept different from those specific to natural persons, introduces the notion of a "non-human person". The same concept has also been suggested for the legal status of animals.
As a result, there is no concrete regulation on the legal status of artificial intelligence in Turkish law, and entities with artificial intelligence are accordingly treated as property. Artificial intelligence, however, may be embedded in hardware or exist purely as software. In the latter case, the software may also qualify as a "work" within the meaning of intellectual property law and benefit from the protection afforded to computer programs under the Law on Intellectual and Artistic Works No. 5846. If artificial intelligence is accepted as a patentable invention, it may also benefit from the patent protection regulated in Article 82 et seq. of the Industrial Property Law.
In determining liability for pecuniary and moral damages caused by artificial intelligence, the current provisions of positive law can offer a partial solution. After determining the legal nature of the artificial intelligence involved, we can impose responsibility on the relevant person according to the circumstances in which the damage arose. However, current law may not always produce equitable results for artificial intelligence systems and entities that are becoming increasingly autonomous. Local legislation offers some possible avenues for resolving such disputes.
Where an assessment is made within the scope of the fault liability regulated in Article 49 of the Turkish Code of Obligations ("TCO") No. 6098 and the equitable liability regulated in Article 69 of the TCO, the officials of the companies that code the artificial intelligence, build the coded technology into machinery or sell the product in its final form may be held responsible, or recourse may be had to the liability of a user at fault.
Some opinions argue that liability arising from the use of machines containing artificial intelligence can also be evaluated within the scope of hazard liability, regulated among the strict liability cases of the TCO. Accordingly, it is stated that manufacturers may be responsible for the damage caused by artificial intelligence machines.
On the other hand, the Law on Product Safety and Technical Regulations No. 7223 ("ÜGTDK"), which entered into force on 12 March 2021, covers all products intended to be placed on the market, supplied, made available on the market or put into service. Since intangible goods also fall within the scope of the law, artificial intelligence technology can be considered a "product" under it. In accordance with the ÜGTDK, if the product causes damage to a person or to property, the manufacturer or importer of the product is obliged to compensate for the damage. If more than one manufacturer or importer is responsible for the damage, they are held jointly and severally liable, meaning each of them is individually responsible for the entire debt; distributors are held secondarily liable. In order to hold the manufacturer or importer responsible, the injured party must prove the damage and the causal link between the non-compliance and the damage. However, if the manufacturer or importer proves that it was not the one that placed the product on the market, that the non-conformity was caused by the intervention of the distributor or the user, or that the defect in the product arose because it was produced in accordance with technical regulations or other requirements, it may be relieved of its liability for compensation.
Conclusion:
There is not yet any concrete, specific regulation for determining the legal and criminal liability arising from decisions made by artificial intelligence. For now, artificial intelligence is brought within the definitions of current legislation through interpretation, and liability for compensation is assessed on that basis. However, considering the autonomous and cognitive structures of artificial intelligence and its deep learning and machine learning techniques, equating it with other simple property may cause legal disputes over liability and will not always produce equitable results. For advanced artificial intelligence that can make decisions on its own and operate without its manufacturer's control, qualifying it as "property" is insufficient to determine responsibility. As developments in artificial intelligence technology gain momentum, special regulations on civil liability for the damage it causes will become necessary. Under current conditions, however, the companies that produce artificial intelligence can be held primarily responsible, and they should therefore be careful. Manufacturing companies should be advised to test their artificial intelligence technologies in detail, to ensure they will not make erroneous decisions, before releasing them to the market. The artificial intelligence named GPT-3 also sees fit to continue its sentence in the aforementioned article as follows:
“I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.”