
Artificial Intelligence does not exist! Defying the technology-neutrality narrative in the regulation of civil liability for advanced technologies

Andrea Bertolini
2022-01-01

Abstract

Too often, legislators resort to umbrella terms to address a broad spectrum of applications. Artificial intelligence, robotics, and platforms are all extremely broad and, from a technological perspective, insufficiently defined terms, encompassing a wide array of highly diversified applications. At the same time, given the diffusion of these terms in public discourse, the temptation to address them unitarily, and thus to regulate them as such, is very strong because of its apparent simplicity. However, if capital markets, toothbrushes, cars, and the medical and legal professions are today regulated separately, even with respect to a narrowly defined issue such as liability, it remains unclear why that ought to change in the future simply because (extremely different) AI-based applications might be deployed in those domains. Indeed, the policy arguments advanced to support such interventions never address this concern openly or directly. The paper will challenge this approach by considering the recent proposal of the European Parliament on the regulation of civil liability and discussing its technology-neutral approach. In particular, it will address the notion of an "AI-system" (AIS) and its effect on legal certainty, the artificial and insufficient distinction between high- and low-risk applications, and the correspondingly different liability regimes proposed. It will demonstrate through examples (e.g., medical malpractice) how this could profoundly affect incentives for the early adoption of more advanced applications by increasing the burden upon the defendants/deployers of the application (which might include the medical practitioner himself). It will discuss the notion of "operators" to show how this would increase the number of potentially responsible parties, causing considerable uncertainty with respect to secondary litigation (and recourse claims), as well as limitations on compensable damage.
All these aspects will be analysed both in themselves and as elements challenging a technology-neutral approach to the regulation of emerging AI-based systems, in general and within the field of civil liability in particular. Finally, a policy argument will be grounded, whereby civil liability for advanced technologies should primarily address victim compensation, not product safety. The latter ought instead to be addressed through standards and product-safety regulation.
Use this identifier to cite or link to this document: https://hdl.handle.net/11382/552651