2019.07.31

AI人工智慧法律議題初探-

Exploring the Legal Aspects of Artificial Intelligence

中銀律師事務所吳筱涵合夥律師

ZHONG YIN LAW FIRM Charlotte S.H. Wu

AI出錯了,誰的責任?

Who is Responsible for AI Failures?

-從無人駕駛車(自駕車)談起

When a Self-Driving Car Goes Wrong

有鑑於酒駕肇事頻傳,近來各界紛紛提出不同解決方案,期待透過修法的途徑,加強對於「人」的行為的制約,以有效降低酒駕事故的發生,例如提高酒駕罰鍰、加重刑事責任,累犯甚至可處死刑[1]等。又,依國內外的調查顯示,交通意外事故的主要肇事原因,「駕駛人因素」至少占了94%以上的比例[2]

There have been more and more DUI cases recently. To reduce the occurrence of drunk driving, various solutions have been proposed in the hope of strengthening constraints on "human" behavior through legislative amendments — for example, raising DUI fines, increasing criminal penalties, and even allowing the death penalty for repeat offenders. Furthermore, according to domestic and international surveys, "driver factors" account for at least 94% of traffic accidents.

既然「人」是交通事故的肇事主因,那麼,如果可透過AI人工智慧科技來降低駕駛人因違規、疲勞駕駛、使用藥物或酒醉駕車,甚至於完全取代人類駕駛車輛,或許可以期待降低交通事故發生機率及事故傷亡人數。因此,目前有許多科技公司及汽車廠商紛紛投入無人駕駛(或自動駕駛[3])汽車的開發與實驗,而我國也在2018年3月由台北市政府率先宣布啟動「台北市自駕車實驗場域實驗試辦計畫[4]」,立法院也在2018年12月通過《無人載具科技創新實驗條例》[5],許多公私部門的研究計畫也估計在2020年左右,自駕車可以量產商用[6]。或許在不久的將來,我們就能在道路上看到汽車座位的駕駛人不再手握方向盤的景象,而前述因駕駛人之因素所發生的交通事故亦可望大幅減少[7]

If "humans" are the main cause of traffic accidents, then we may expect a decline in both the number of traffic accidents and casualties if artificial intelligence is used to prevent rule violations, driver fatigue, and driving under the influence of drugs or alcohol, or even to replace human drivers entirely. Many technology companies and automobile manufacturers have therefore invested in the development and testing of self-driving (or autonomous) cars. In Taiwan, the Taipei City Government announced the launch of the "Taipei City Autonomous Vehicle Test Field Program" in March 2018, and in December 2018 the Legislative Yuan passed the 《Unmanned Vehicles Technology Innovative Experimentation Act》. Research projects in both the public and private sectors estimate that autonomous cars could be mass-produced for commercial use around 2020. Perhaps in the near future we will see drivers who no longer hold the steering wheel, and traffic accidents caused by driver error may be greatly reduced.

不過,自動駕駛科技也並非安全無虞,Google所研發的汽車自動駕駛技術團隊進行道路測試時,處於全自動駕駛模式的汽車因閃避障礙物而變換車道後,遭到後方公車碰撞發生事故[8],這使得我們不免思考,在2020年以後的某個日子,一輛無人自動駕駛車穿梭在大街小巷時,不幸發生交通事故,導致行人被撞身亡,那麼,什麼人需要為AI出錯造成事故的傷亡,負擔起責任呢?是汽車製造商、無人駕駛系統程式開發或設計人員?抑或是車輛使用人?是AI軟體程式人員?還AI系統的開發商或是AI系統本身?

Nevertheless, self-driving technology is not flawless. During a road test by Google's self-driving car team, a car in fully autonomous mode changed lanes to avoid sandbags blocking its path and was struck by a transit bus approaching from behind. This makes us wonder: some day after 2020, when a self-driving car roaming the streets unfortunately causes an accident in which a pedestrian is killed, who should be held responsible for the casualties caused by the AI's failure? The car manufacturer? The developers or designers of the self-driving system? The vehicle user? The AI programmer? The AI system developer? Or the AI system itself?

尋找AI系統失靈的原因,為確立法律責任的關鍵因素之一

Finding the cause of an AI system failure is one of the key factors in determining legal liability

就傳統上之民事侵權責任來說,無論是多複雜先進的機器,理論上將其相關損害結果歸諸於其背後的人類操作、設計或製造上具體行為的故意或過失等尚非難事[9]。例如:

When it comes to traditional tort liability, no matter how complex or advanced a machine is, it is in theory not difficult to attribute the resulting damage to specific intentional or negligent acts in its human operation, design, or manufacture. For example:

  • 違反注意義務之過失責任;
  • Liability for negligence due to breach of duty of care;
  • 因違反契約條款明示或默示約定之違約責任;
  • Liability for breach of contract due to the violation of express or implied terms;
  • 消費者保護責任中,因商品或服務之缺陷所造成損害賠償責任。
  • Liability for damage caused by defects in goods or services under consumer protection law.

由於參與AI系統的人員來自四面八方,包括數據資料提供者、設計者、製造商、程式設計師、軟體開發者、用戶及AI系統本身。因此,當AI發生問題時,並不容易確認責任主體,而須一併參酌多方面的因素而定,例如:

Those involved in an AI system come from many different backgrounds: data providers, designers, manufacturers, programmers, software developers, users, and the AI system itself. As a result, when AI goes wrong, it is hard to determine who is responsible, and various factors must be considered together, for example:

然而,在AI人工智慧日趨成熟而具有高度自主性(autonomy)的情況下,此時在認定相關損害賠償責任則有相當困難存在[10]。舉例來說,當軟體有瑕疵,或者因軟體之使用以致他人受損害,則過失責任之成立要件,主要為:(1)對於損害的發生,負有注意義務;(2)違反該注意義務;(3)因而導致他方受到損害,且注意義務的違反與損害之間具有因果關係。

However, as artificial intelligence matures and acquires a high degree of autonomy, determining liability for damage becomes quite difficult. For example, when software is defective, or when the use of software causes damage to others, the elements of negligence liability are mainly: (1) a duty of care is owed with respect to the occurrence of the damage; (2) that duty of care is breached; and (3) the breach causes damage to another party, with a causal relationship between the breach and the damage.

如果軟體供應商對消費者/使用人負有注意義務,那麼,AI軟體商的注意義務為何呢?以什麼標準來確立其注意義務?另一方面,在AI系統的決策過程中,如果已完全由AI自主,那麼,消費者或使用人是否仍然也負有注意義務呢?

If the software supplier owes a duty of care to the consumer/user, what then is the AI software supplier's duty of care, and by what standard should it be established? On the other hand, if the AI is fully autonomous in its decision-making process, do consumers or users still owe a duty of care?

舉例來說,美國在2016年公布的《聯邦自動駕駛車政策(Federal Automated Vehicles Policy)》,將汽車依照自動化程度,區分為六個等級:「無自動化(no automation)」、「駕駛人之輔助(driver assistance)」、「部分自動駕駛(partial automation)」、「有條件自動駕駛(conditional automation)」、「高度自動駕駛(high automation)」、「全自動駕駛(full automation)」等。若自駕車已經達到「高度或全自動駕駛」的程度,則人類的角色已經由傳統的駕駛人轉變為乘客的角色,而無須介入行車環境的監控,在此情況下,如課予乘客注意義務,或者成立其與損害間的因果關係,就有可能發生爭議[11]

For example, the 《Federal Automated Vehicles Policy》 announced by the United States in 2016 classifies automobiles into six levels of automation: "no automation", "driver assistance", "partial automation", "conditional automation", "high automation", and "full automation". If a self-driving car has reached "high or full automation", the human's role changes from traditional driver to passenger, with no need to intervene in monitoring the driving environment. In such a case, imposing a duty of care on the passenger, or establishing a causal relationship between the passenger and the damage, may be controversial.
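The six-level classification above lends itself to a simple enumeration. The following is a hypothetical sketch: the level names follow the policy, but the monitoring-duty rule is a deliberate simplification for illustration, not a legal standard.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Six levels of driving automation per the 2016 Federal
    Automated Vehicles Policy (based on the SAE classification)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


def human_monitoring_expected(level: AutomationLevel) -> bool:
    """Illustrative simplification: at levels 0-3 the human is still
    expected to monitor the driving environment or take over; at
    levels 4-5 the human is effectively a passenger."""
    return level < AutomationLevel.HIGH_AUTOMATION
```

On this sketch, the duty-of-care question discussed above turns on the level: `human_monitoring_expected(AutomationLevel.FULL_AUTOMATION)` is `False`, which is precisely the situation where imposing a duty of care on the occupant becomes contestable.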

若AI軟體系統違反注意義務,可能會有許多種情形。例,開發人員對於程式運作之錯誤具有偵測的能力;AI的知識基礎不正確或不適當;AI的警示或紀錄是不正確或不適當的;沒有及時更新數據;使用人輸入錯誤;使用人過度依賴AI系統之輸出結果[12];或者使用人出於錯誤的目的而使用程式等。那麼,在什麼情況下,我們可以認為是AI系統違反了注意義務呢?因為AI系統的發展越來越像是一個「黑箱」,AI的所有決策過程都存在於這個「黑箱」中,自駕車的AI系統自不例外[13]

If the AI software system breaches a duty of care, several scenarios are possible. For example: the developer had the ability to detect errors in the program's operation; the AI's knowledge base is incorrect or inappropriate; the AI's warnings or records are incorrect or inappropriate; the data is not updated in time; the user enters incorrect input; the user relies excessively on the output of the AI system; or the user uses the program for an improper purpose. In what situations, then, can we say that the AI system itself has breached a duty of care? This is difficult because the development of AI systems increasingly resembles a "black box": all of the AI's decision-making takes place inside this "black box", and the AI systems of self-driving cars are no exception.

再者,AI系統是不是可以被認為是導致損害的原因,仍然有許多討論空間。例如,AI系統是不是對於特定狀況或情境,作出「建議某個行為(recommends an action)」,例如專家系統(expert system),或者作出「採取行動(take an action)」,例如自駕車。在前者的情形,至少會牽涉到一項其他代理機制而不容易證明因果關係;至於後者的情形,就相對容易判斷[14]

In addition, whether the AI system can be considered the cause of the damage still leaves room for discussion. For example, in a given situation or scenario, does the AI system "recommend an action", as an expert system does, or "take an action", as a self-driving car does? In the former case, at least one other agent is involved, making the causal relationship hard to prove; in the latter case, it is relatively easy to determine.

此外,由於AI的決策與行動已非單純被動式依賴人類預先設定的規則與指令,而是擷取出廠後與周遭環境的訊息以進行分析與決策,因而機器運作的結果往往會超出製造商或者設計者出廠時的設定[15],在這樣的情形下,如果損害的發生,即使是由AI系統的行為所致,但因為欠缺可預見性(foreseeability),也很可能會對受害人在向車輛製造廠商求償上構成障礙,甚至很可能沒有任何人需要負責。

Furthermore, the AI's decisions and actions no longer passively rely on rules and instructions preset by humans; instead, after leaving the factory the AI gathers information from its surroundings and analyzes it to make decisions. As a result, the machine's behavior often exceeds the settings made by the manufacturer or designer. In such cases, even if the damage is caused by the AI system's behavior, the lack of foreseeability may hinder the victim's claim for compensation against the vehicle manufacturer; it is even possible that no one will be held responsible.

雖有論者認為,受害人可藉由危險責任(strict liability)向車輛製造廠商求償,但危險責任的成立,多與相關事業活動本身即存在危險性有關,為了平衡當事人利益及分配社會風險而課予義務,但自駕車的設計理念,既然是為了減少或避免因人類駕駛疏失所造成的交通事故人員傷亡,因此,危險責任是否適用於自駕車的場合仍有疑義[16]

Although some argue that victims can seek compensation from the vehicle manufacturer under strict liability, strict liability usually attaches to business activities that are inherently dangerous; the obligation is imposed to balance the parties' interests and distribute social risks. Since self-driving cars are designed precisely to reduce or prevent casualties caused by human driver error, whether strict liability applies to self-driving cars remains questionable.

有鑑於風險分配的方式,將可能影響相關產業創新的誘因,從而,如何在鼓勵創新、保護消費者及公共安全之間平衡兼顧,考驗著各國政府的智慧,值得我們重視。目前人工智慧不止應用在汽車產業,也擴及很多其他相關產業,如醫療器材產業、零售產業……等,台灣自許在人工智慧的未來領域中要能夠佔得一席之地,那麼亦應同步建立合理合適的法規制度環境。人工智慧應用可能產生的法律問題,在不久的將來很可能發生,值得我們持續的關注、瞭解、與重視。

The way risks are distributed may affect the incentives of related industries to innovate. How to strike a balance among encouraging innovation, protecting consumers, and ensuring public safety therefore tests the wisdom of governments everywhere and deserves our attention. Artificial intelligence is now applied not only in the automobile industry but also in many other industries, such as medical devices and retail. If Taiwan wishes to gain a foothold in the field of artificial intelligence, a reasonable and suitable regulatory environment should be established in parallel. The legal issues arising from AI applications are likely to emerge in the near future and deserve our continued attention, understanding, and serious consideration.

如須法律諮詢與服務,歡迎洽詢:

If you need legal counsel or services, you are welcome to contact us:

吳筱涵律師Email:charlotte.wu@zhongyinlawyer.com.tw

Charlotte Wu, Attorney-at-law Email: charlotte.wu@zhongyinlawyer.com.tw


[1] Abby Huang,【酒駕修法】法務部草案出爐:累犯最重可判死刑,車輛視同「犯罪工具」當場沒收,關鍵評論,2019年2月27日, https://www.thenewslens.com/article/114571 (最後瀏覽日:04/19/2019)

Abby Huang, 【Drunk Driving Law Amendment】 Draft from the Ministry of Justice: Recidivists May Face the Death Penalty, and Vehicles Regarded as "Instruments of Crime" May Be Confiscated on the Spot, The News Lens, 2019/02/27, https://www.thenewslens.com/article/114571 (Last visited: 04/19/2019).

[2] 內政部警政署,《道路交通事故死亡人數與國際比較分析》,頁5,2015年11月,https://www.npa.gov.tw/NPAGip/wSite/public/Attachment/f1475544437668.pdf (最後瀏覽日:04/19/2019);National Highway Traffic Safety Administration, Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey (2015), available at https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115. (last visited: Apr.19,2019)  

[3] 『「自動駕駛」是指,在不需人類即時(real-time)輸入控制指令情況下,由電腦自行操控車輛以執行加速、煞車或轉彎等任務,而具備自動駕駛能力之動力車輛即屬於自動駕駛車輛,自動駕駛車輛科技的最終目標是使汽車能夠在沒有駕駛者或乘客的情況下自行駕駛,其可能之優點可能有增加交通安全性、提升能源使用效率,並使老人、身心障礙者與小孩能自行移動,但缺點亦同樣存在,例如自動駕駛汽車決定並控制車輛行駛路線時,將進而影響使用者的行動自主權。』,參前揭註7,頁2-3。

"Self-driving" means that the vehicle is controlled by a computer to perform tasks such as accelerating, braking, or turning without real-time human control input; a powered vehicle with self-driving capability is an autonomous vehicle. The ultimate goal of autonomous vehicle technology is to enable the car to drive itself without a driver or passengers. Possible benefits include increased traffic safety, improved energy efficiency, and enabling the elderly, the physically challenged, and children to travel on their own. There are also drawbacks: for example, when an autonomous car determines and controls its route, it affects the user's autonomy of movement. See note 7, pages 2-3.

[4] 臺北市政府,〈臺北市智慧城市產業場域實驗試辦計畫〉,https://smartcity.taipei/posts/16;〈全台首個自駕車擬真實證場域在台北!今首度對外公開〉,2018年3月13日,匯流新聞網,https://cnews.com.tw/002180313000abc/ (最後瀏覽日:04/19/2019)。

Taipei City Government, 〈Taipei Smart City Industrial Field Pilot Program〉, https://smartcity.taipei/posts/16; 〈First Ever Autonomous Vehicle Test Field in Taipei! Opens to the Public Today〉, 2018/03/13, CNEWS, https://cnews.com.tw/002180313000abc/ (Last visited: 04/19/2019).

[5] 惟本條例實施日期,尚待行政院公告,參,全國法規資料庫,https://law.moj.gov.tw/LawClass/LawAll.aspx?pcode=J0030147 (最後瀏覽日:04/19/2019)。

The implementation date of this Act is subject to announcement by the Executive Yuan; see the Laws & Regulations Database of The Republic of China, https://law.moj.gov.tw/LawClass/LawAll.aspx?pcode=J0030147 (Last visited: 04/19/2019).

[6] 〈AI與法律、哲學、社會議題跨領域對談[自駕車場次]〉,頁93,載於:《人文與社會科學簡訊》,20卷1期,107年12月。

〈AI Cross-Disciplinary Discussion on Legal, Philosophical, and Social Issues [Self-Driving Car Session]〉, page 93, in: 《Humanities and Social Sciences Newsletter》, Vol. 20, No. 1, December 2018.

[7] 董啟忠,《自動駕駛車輛交通事故侵權責任之研究》,頁2,國立高雄科技大學科技法律研究所碩士論文(2018);林妍溱(2016),〈美國研究:自駕車意外發生率少於一般車輛〉,2016年1月1日,iThome,https://www.ithome.com.tw/news/102986 (最後瀏覽日:04/19/2019)。

TUNG, CHI-CHUNG, 《Study on Tort Liability of Traffic Accidents by Self-driving Car》, page 2, Master's thesis, Graduate Institute of Science and Technology Law, National Kaohsiung University of Science and Technology (2018); LIN, YEN-CHEN (2016), 〈US Study: Self-driving Car Accidents Are Less Frequent than General Vehicles〉, 2016/01/01, iThome, https://www.ithome.com.tw/news/102986 (Last visited: 04/19/2019).

[8] 陳曉莉,〈Google自駕車首度擔負車禍肇事責任〉,https://www.ithome.com.tw/news/104210 (最後瀏覽日:04/19/2019)。

CHEN, HSIAO-LI, 〈Google Self-Driving Car Responsible for Traffic Accident for the First Time〉, https://www.ithome.com.tw/news/104210 (Last visited: 04/19/2019).

[9]林勤富,楊漢威(2018),〈人工智慧法律議題初探〉,《月旦法學雜誌》,頁205,第274期,2018年3月。

LIN, CHIN-FU and YANG, HAN-WEI (2018), 〈Exploring the Legal Aspects of Artificial Intelligence〉, 《The Taiwan Law Review》, page 205, Volume 274, March 2018.

[10] 同前註。

Same as the previous note.

[11] 同前揭註9。

Same as note 9.

[12] Kingston, J. (2018). Artificial Intelligence and Legal Liability. arXiv preprint arXiv:1802.07782.

[13] 〈AI看穿了人們的喜怒哀樂,但人類卻看不透AI的「黑箱作業」〉,2018年7月17日,科技報橘,https://buzzorange.com/techorange/2018/07/17/ai-facial-recognition-emotion-detect-tech/ (最後瀏覽日:04/19/2019)。

〈AI Reads Human Emotions, Yet Humans Cannot See Through the "Black-Box Operation" of AI〉, 2018/07/17, TechOrange, https://buzzorange.com/techorange/2018/07/17/ai-facial-recognition-emotion-detect-tech/ (Last visited: 04/19/2019).

[14] Supra Note 12 at 5-6.

[15] 參前揭註9,頁206。

Refer to note 9, page 206.

[16] 同前註。

Same as the previous note.