
LIABILITY OF STATE FOR NEGLIGENCE BY AI

Author: Anjali Shekhawat, IV year of B.A., LL.B. (Hons.) from Rajiv Gandhi National University of Law.


ABSTRACT

Artificial Intelligence is taking over every field today, including health care. When dealing with health-related issues, one has to be cautious, as a person's life might be at stake. This study addresses the issues of medical negligence by AI and the resulting legal liability. Artificial Intelligence is based on algorithms and information uploaded by manufacturers, and making it legally liable is therefore highly complicated. This project focuses on the legal liability of the government arising out of AI's negligence.


INTRODUCTION

In ancient times, when monarchy was prevalent in our society, kings were given immunity for their actions, which was meant to help them rule without disturbance and shield them from trouble. People had no say in the workings of the state. Many dynasties followed this system. In the British Empire, the legal maxim “rex non potest peccare,” meaning ‘the king can do no wrong,’ was widely known. The king was never held responsible and was considered above every law. His decisions were accepted as final. Under these circumstances, the rulers and their ministers did many wrongs, which always went unpunished.


This immunity led to the exploitation of the powers given to the state. Later, the courts realized the necessity of making the government liable for its actions. The question arises of how to deal with such liability in the case of AI, which has now spread into nearly every aspect of our lives. Artificial intelligence is based on the principle that human intelligence can be defined in a way that a machine can mimic, enabling it to execute tasks from the simplest to the most complex. The goals of artificial intelligence include learning, reasoning, and perception.


Negligence by AI

AI is applied to tasks that require far more than routine digital algorithms. The question to consider is: who bears legal responsibility when an AI system or device makes a decision that results in harm? There is currently no regulatory framework applicable to a broad range of AI that answers this question. There are many problems in making Artificial Intelligence liable for the wrongs it commits, as it has not been recognized as a legal entity. Since AI is based on algorithms, it can be checked whether the algorithms are correct and whether the machinery is working; this primary testing has to be done before relying on Artificial Intelligence. It is the manufacturer's duty to verify that the product meets the required standard and can be used without causing harm.


The most feasible option left is to make the manufacturer or the owner liable for a malfunction that could have been avoided. Another great limitation of Artificial Intelligence is that the software and algorithms it runs on are written into its memory by human beings; technically, Artificial Intelligence can only assist humans in some places, but it cannot replace them in everything. After all, an Artificial Intelligence-based program is just a machine that can malfunction at any time and put someone’s life in danger. We need to recognize that AI cannot be trusted with everything and that it cannot and should not take over all aspects of our lives; it is merely a human creation that should be used to assist people in their work, not to replace them. Another major issue is morality and ethics, which would be very difficult to instill in an Artificial Intelligence program, even though people’s lives are shaped by both in every scenario.


The first examples of AI cases are beginning to appear: for instance, a class action was filed in early 2017 against Tesla over its automated vehicles’ Autopilot system, claiming that it contained inoperative safety features and faulty enhancements. This is just one example of AI’s implementation across various fields. Since there is no settled legal jurisprudence, law, or policy to guide us on AI’s negligence, a conclusion can be drawn that the manufacturer is to be held liable if there is an error in the installed software or algorithms. If the user uses the system in a manner that leads to a malfunction and harms others, the user must be held liable for that misuse. The company or owner allowing the AI’s use under its name is liable if it did not take all reasonable and necessary precautions before letting the product be sold in the market.

When the state provides a service, it is assumed that the service is provided with proper care and caution and is for the welfare of the citizens. The state, being a powerful entity, leads people to believe it will act fairly. Citizens tend to trust the state; therefore, if there is any negligence by the state or its employees, the state has to provide compensation and is held liable for the wrongs committed. In the same way as a doctor or hospital is held liable, the state would be held liable for negligence in services provided in government hospitals or in any other medical facility it runs. The state would be held liable if the doctor failed to exercise due diligence. Before making the state liable, the questions that arise are: has the state conducted proper tests of the hospital employees before appointing them? Has the state taken proper care of the environment and the facilities of service?


The state would be held liable only in those cases where the harm was caused by the negligence of a state-employed doctor or other staff member, where the facilities, food, or medicines provided were faulty, or where the harm could have been prevented had the state been careful enough. Proving the state’s liability for medical negligence by human doctors is much easier than doing so for robot doctors. The court has many precedents to refer to for medical negligence by doctors, but what would be the course of action, and who would be liable, if there is medical negligence by Artificial Intelligence-programmed machines? Let us discuss how the state would be held liable for negligence by machines deployed by the state.


As established above, liability for negligence by AI depends on the circumstances, and the same applies to the State’s liability. The state would be held liable for negligence by AI that results in harm to others only if the failure occurred due to a lack of care on the part of the regulator of the AI. In a scenario where a human is supervising the actions of the AI, if that human makes a mistake which leads to a negligent act by the robot, that human would be held liable, and the state would be vicariously accountable for his actions. If the AI was not tested properly before being used or deployed and negligence occurs, the government would be held strictly liable for hiring a dangerous machine, without any tests, for a job that demands great expertise. The state would further be responsible for providing any facility that can prove fatal if not properly run, should that facility harm someone. The state would be held jointly liable with the manufacturer of the AI if the negligence was of a type that the government could have detected while testing the program. The government has to exercise a reasonable standard of care before employing AI and ensure that the program does not injure the people who use the facility. In the medical field, extra care would have to be taken by the state to ensure that the AI is fully fit for use and approved by experts before government hospitals are allowed to use it. When a citizen comes to use a facility provided by the state, he assumes that the government would offer the facility only if it is suitable and secure. The state must honour that assumption and ensure the proper deployment of AI in the services it provides.

Proving liability in the case of AI is very difficult because it is almost impossible to determine whether the fault was due to an inherent defect in the machine itself, the faulty operation of the program, or an inevitable accident.
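The liability rules discussed above amount, in effect, to a set of conditional tests. Purely as an illustration, and not as a statement of any actual statute or doctrine, the allocation could be sketched as a simple decision rule (all category names and parameters here are hypothetical, invented for this example):

```python
# Illustrative sketch only: a hypothetical decision rule mapping the
# circumstances discussed above to the parties who may bear liability.
# The categories and names are invented for illustration; they are not
# drawn from any statute or case law.

def allocate_liability(*, software_defect: bool, user_misuse: bool,
                       supervisor_error: bool, state_tested_ai: bool,
                       defect_detectable_in_testing: bool) -> set:
    """Return the set of parties potentially liable for harm caused by an AI system."""
    liable = set()
    if software_defect:
        # An error in the installed software or algorithms points to the manufacturer.
        liable.add("manufacturer")
        # A defect the state could have caught in pre-deployment testing
        # makes the state jointly liable with the manufacturer.
        if defect_detectable_in_testing:
            liable.add("state")
    if user_misuse:
        # Use in a manner that causes a malfunction points to the user.
        liable.add("user")
    if supervisor_error:
        # The supervising employee errs; the state answers vicariously.
        liable.add("supervisor")
        liable.add("state")
    if not state_tested_ai:
        # Deploying an untested machine: strict liability on the state.
        liable.add("state")
    return liable
```

For example, a software defect that proper state testing could have detected would, under this sketch, yield joint liability: `allocate_liability(software_defect=True, user_misuse=False, supervisor_error=False, state_tested_ai=True, defect_detectable_in_testing=True)` returns `{"manufacturer", "state"}`. A real legal determination would, of course, turn on facts and doctrine that no such rule can capture.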


CONCLUSION

We cannot neglect the fact that AI’s use is growing and establishing deep roots in our society; it is only going to increase in the coming years, and we therefore have to work out ways to tackle the problem of legal liability. In the medical field especially, AI has proved highly beneficial for human beings. We have to adapt ourselves to the upcoming world of AI and amend our laws accordingly. We need to make new laws, give AI a place in our constitution, and recognize the rights of citizens against AI. The state needs to establish regulations for the use of AI. AI is no longer a new concept; in the coming years, it will become a part of our lives, and the basic need of any new entity is a law that determines its rights, duties, and liabilities. Therefore, AI has to be recognized by law, and its responsibilities have to be determined.
