CIPP/Lallemand Seminar: Towards Intelligent Regulation of Artificial Intelligence
December 6, 2018 • 1:00 PM – 2:30 PM
McGill Law Faculty (3644 Peel St.), NCDH room 609
Artificial intelligence (AI) is rapidly becoming part of our daily lives, offering myriad benefits for society. At the same time, there is concern about the unpredictability and uncontrollability of AI. In response, legislators and scholars have called for greater transparency and explainability of AI. This article considers what it would mean to require transparency of AI. It advocates looking beyond the opaque concept of AI and focusing instead on the concrete risks and biases of its underlying technology: machine-learning algorithms. The article discusses the biases that an algorithm may produce through its input data, its testing, and its decision model. Any transparency requirement for algorithms should result in explanations of these biases that are both understandable for the prospective recipients and technically feasible for producers to provide. Before asking how much transparency the law should require from algorithms, this article argues that we should consider whether the explanation that programmers can offer is useful in specific legal contexts.
Miriam Buiten is a Junior Professor of Law and Economics at the University of Mannheim. Her research focuses on technology law, EU consumer law, and competition law and economics. Her current research projects address the liability of online platforms, regulatory requirements for artificial intelligence, and the law and economics of robo-advisors. Prior to joining the University of Mannheim, Miriam worked as a legal advisor for a Dutch law firm and as a trainee at the European Commission. She obtained a joint PhD from Erasmus University Rotterdam and Hamburg University.