Exploring the Legal Aspects of Artificial Intelligence
ZHONG YIN LAW FIRM Charlotte S.H. Wu
Who is Responsible for AI Failures?
When a Self-Driving Car Goes Wrong
DUI cases have become increasingly common in recent years. To reduce the occurrence of drunk driving, various solutions have been proposed that aim to strengthen the regulation of the “human” by amending the law, for example, increasing the penalties for DUI and even introducing the possibility of a death sentence for recidivists. Furthermore, according to domestic and international surveys, 94% of traffic accidents are caused by “drivers”.
If “humans” are the main factor causing traffic accidents, then we may expect a decline in the number of traffic accidents and casualties if artificial intelligence is put in place to avoid rule violations, driver fatigue, and driving under the influence of drugs, alcohol, or medication, or even to replace the human driver altogether. Many tech companies and automobile manufacturers have therefore invested in the development and testing of self-driving cars. The Taipei City Government announced the launch of the “Taipei City Autonomous Vehicle Test Field Program” in March 2018. In December 2018, the Legislative Yuan also passed the 《Unmanned Vehicles Technology Innovative Experimentation Act》. According to research projects in multiple public and private sectors, autonomous cars could be mass-produced by around 2020. Perhaps in the near future, drivers will no longer have to hold the steering wheel, and traffic accidents caused by drivers will be greatly reduced.
Nevertheless, self-driving cars are not flawless. During a road test of the Google self-driving car, the autonomous car collided with a transit bus from behind when it changed lanes after detecting sandbags blocking its path. This makes us wonder: some day in 2020, when a self-driving car roams the road and causes a fatal accident, who is to blame for the failure of the AI? The car manufacturer? The AI programmer? The AI system developer? Or perhaps the AI itself?
Finding the cause of the AI failure is the key to determining who is liable
When determining liability for traditional torts, no matter how complex the machine, it is in theory not difficult to attribute the damage to specific intentional or negligent acts in the human operation, design, and manufacture behind it:
- Liability for negligence due to breach of a duty of care;
- Liability for breach of contract due to violation of implied or express terms;
- Liability for damage caused by defects in goods or services under consumer protection law.
Those involved in AI have diverse backgrounds: data providers, designers, manufacturers, programmers, software developers, users, and the AI system itself. As a result, when AI goes wrong, it is hard to determine who is responsible, and various factors must be examined.
However, as artificial intelligence matures and gains high autonomy, determining liability for damage becomes quite difficult. For example, when there is a flaw in the software, or when use of the software causes damage to others, the requirements for establishing fault liability are: (1) the party owed a duty of care with regard to the occurrence of the damage; (2) that duty of care was breached; and (3) harm was caused to another party, with a causal relationship between the breach of the duty of care and the damage.
If the software supplier owes a duty of care to the consumer/user, what should an AI software supplier pay attention to, and what criteria are used to establish its duty of care? On the other hand, if the AI is in complete control of the decision-making process, do consumers or users still owe a duty of care?
For example, according to the 《Federal Automated Vehicles Policy》 announced by the United States in 2016, automobiles can be classified into six levels by their degree of autonomy: “no automation”, “driver assistance”, “partial automation”, “conditional automation”, “high automation”, and “full automation”. If a self-driving car has reached “high or full automation”, the role of the human changes from traditional driver to passenger, who no longer needs to monitor the driving environment. In this case, imposing a duty of care on the passenger, or establishing a causal relationship between the passenger and the damage, may become controversial.
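The six-level taxonomy above can be expressed as a simple ordered scale. The sketch below is purely illustrative: the level names come from the policy, but the `human_monitors_environment` rule is a simplified assumption for discussion, not a legal test.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Six levels of vehicle autonomy from the 2016 Federal Automated Vehicles Policy."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_monitors_environment(level: AutomationLevel) -> bool:
    """Simplified assumption for illustration only: below high automation the
    human is still expected to monitor the driving environment; at high or
    full automation the human is effectively a passenger."""
    return level < AutomationLevel.HIGH_AUTOMATION

print(human_monitors_environment(AutomationLevel.PARTIAL_AUTOMATION))  # True
print(human_monitors_environment(AutomationLevel.FULL_AUTOMATION))     # False
```

On such a scale, the duty-of-care question discussed above arises precisely at the point where this hypothetical function flips from true to false.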
If the AI software system breaches its duty of care, there may be several scenarios: the developer has the ability to detect errors in the program’s operation; the knowledge in the AI is incorrect or inappropriate; the warnings or records of the AI are incorrect or inappropriate; the data is not updated in time; the user makes an input error; the user relies excessively on the output of the AI system; or the user uses the program for the wrong purpose. In what situation, then, can we determine that it is the AI system that breached the duty of care? The development of AI systems increasingly resembles a black-box operation: all AI decisions are made inside this “black box”, and the same goes for the AI system of a self-driving car.
In addition, there is still room for discussion on whether the AI system can be considered the cause of the damage. For example, does the AI system “recommend an action” for a particular situation or scenario, as an expert system does, or does it “take an action”, as a self-driving car does? In the former case, at least one other agency mechanism is involved, making the causal relationship hard to prove; in the latter case, it is relatively easy to determine.
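The causal difference between the two modes can be made concrete by counting the agents standing between the AI output and the harm. The following sketch is a hypothetical model for illustration; the type names and the counting rule are assumptions, not established legal doctrine.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An expert system only recommends; a human agent still decides whether to act."""
    action: str
    accepted_by_human: bool

@dataclass
class AutonomousAction:
    """A self-driving car acts directly, with no intervening human decision."""
    action: str

def intervening_agents(event) -> int:
    """Hypothetical helper: number of other agency mechanisms in the causal chain.
    A recommendation passes through at least one further decision-maker;
    an autonomous action does not."""
    return 1 if isinstance(event, Recommendation) else 0

print(intervening_agents(Recommendation("brake", accepted_by_human=True)))  # 1
print(intervening_agents(AutonomousAction("brake")))                        # 0
```

The longer the chain of intervening agents, the harder it becomes to trace the damage back to the AI system itself, which is why causation is easier to establish for the self-driving car than for the expert system.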
Furthermore, the decisions and actions of an AI do not rely merely on the rules and instructions set by humans; the AI analyzes information from its surrounding environment and decides accordingly. As a result, the machine’s operation often exceeds the original settings made by the manufacturer or designer. When damage occurs in this situation, the AI system’s behavior lacks foreseeability, which hinders the victim from seeking compensation from the automobile manufacturer. Worse still, it may turn out that no one has to take responsibility.
Some believe that victims can claim compensation from the automobile manufacturer under strict liability. However, strict liability is premised on the risk inherent in the business activity itself; the obligation is meant to balance the parties’ interests and to distribute social risks. Yet the very point of self-driving cars is to reduce or prevent casualties from traffic accidents caused by human drivers. Whether strict liability can be applied to self-driving cars therefore remains in question.
The way risks are distributed may affect the incentives of related industries to innovate. Striking a balance between encouraging innovation and protecting consumers and public safety is therefore a vital issue that tests the wisdom of governments everywhere and requires our attention. Artificial intelligence is used not only in the automobile industry but in many other industries as well, such as medical equipment and retail. Should Taiwan wish to gain a foothold in the field of artificial intelligence, suitable laws and regulations should be put in place simultaneously. The legal issues of artificial intelligence may arise in the near future, and they require our continued attention, understanding, and serious treatment.
If you need legal counsel or services, you are welcome to contact us:
Charlotte Wu, Attorney-at-law Email: email@example.com
Abby Huang【Drunk driving law amendment】Draft from the Ministry of Justice: Recidivist may face death penalty, the automobile is regarded as “guilty tool” and can be confiscated on the spot. The News Lens, 2019/02/27, https://www.thenewslens.com/article/114571 (Last visited: 04/19/2019)
National Police Agency, Ministry of the Interior, 《Analysis of Road Traffic Accident Fatalities and International Comparison》, page 5, November 2015, https://www.npa.gov.tw/NPAGip/wSite/public/Attachment/f1475544437668.pdf (Last visited: 04/19/2019); National Highway Traffic Safety Administration, Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey (2015), available at https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115 (Last visited: 04/19/2019).
“Self-driving” means the car is controlled by a computer to perform tasks such as accelerating, braking, or turning without a human inputting real-time control commands. A powered car with self-driving capability is an autonomous car. The ultimate goal of the autonomous car is to drive on its own without a driver or passenger. Possible benefits include increased traffic safety, improved energy efficiency, and mobility for the elderly, the physically challenged, and children. There are also shortcomings; for example, when the autonomous car determines and controls the route, it affects the autonomy of the user. Refer to note 7, pages 2-3.
Taipei City Government, 〈Taipei Smart City Industrial Field Pilot Program〉, https://smartcity.taipei/posts/16; 〈First Ever Autonomous Vehicle Test Field in Taipei! Opens to the Public Today〉, 2018/03/13, CNEWS, https://cnews.com.tw/002180313000abc/ (Last visited: 04/19/2019).
The implementation date of this regulation is subject to the announcement of the Executive Yuan, refer to the Laws & Regulations Database of The Republic of China, https://law.moj.gov.tw/LawClass/LawAll.aspx?pcode=J0030147 (Last visited: 04/19/2019).
〈AI Cross-Disciplinary Discussion of Legal, Philosophical, and Social Issues [Self-driving Cars]〉, page 93, in: 《Humanities and Social Sciences Newsletter》, Volume 20-1, December 2018.
TUNG, CHI-CHUNG, 《Study on Tort Liability of Traffic Accidents by Self-driving Car》, page 2, Master’s thesis, Graduate Institute of Science and Technology Law, National Kaohsiung University of Science and Technology (2018); LIN, YEN-CHEN, 〈US Study: Self-driving Car Accidents are Less Frequent than General Vehicles〉, 2016/01, iThome, https://www.ithome.com.tw/news/102986 (Last visited: 04/19/2019).
CHEN, HSIAO-LI, 〈Google Self-Driving Car Responsible for Traffic Accident for the First Time〉, https://www.ithome.com.tw/news/104210 (Last visited: 04/19/2019).
LIN, CHIN-FU & YANG, HAN-WEI, 〈The Exploration of the Legal Aspects of Artificial Intelligence〉, 《The Taiwan Law Review》, Volume 274, page 205, March 2018.
Same as the previous note.
Same as note 9
 Kingston, J. (2018). Artificial Intelligence and Legal Liability. arXiv preprint arXiv:1802.07782.
〈AI reads the expressions of humans, yet humans cannot see through the “black-box operation” of AI〉, 2018/07/17, TechOrange, https://buzzorange.com/techorange/2018/07/17/ai-facial-recognition-emotion-detect-tech/ (Last visited: 04/19/2019).
 Supra Note 12 at 5-6.
Refer to note 9, page 205
Same as the previous note