
Explainable artificial intelligence, lawyer’s perspective

Authors: Łukasz Górski, Shashishekar Ramakrishna. ICAIL '21: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, June 2021, pages 60–68. Published: 21 June 2021.

“Explainable artificial intelligence (XAI) is a research direction that has already come under scrutiny, in particular in the AI&Law community. While there have been notable developments in the area of (general, not necessarily legal) XAI, user experience studies of such methods, as well as more general studies of how users understand explainability, still lag behind. This paper first assesses the performance of different explainability methods (Grad-CAM, LIME, SHAP) in explaining the predictions for a legal text classification problem; those explanations were then judged by legal professionals according to their accuracy. Second, the same respondents were asked to give their opinion on the desired qualities of an (explainable) artificial intelligence (AI) legal decision system and to present their general understanding of the term XAI. This part was treated as a pilot study for a more extensive one regarding lawyers’ positions on AI, and XAI in particular.”
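To give a concrete sense of the kind of explanation the study's respondents evaluated, here is a minimal sketch of the idea behind LIME for text: perturb the input by masking words, query the classifier on the perturbed texts, and fit a weighted linear surrogate whose coefficients rank each word's contribution. The toy corpus, labels, and classifier below are invented for illustration (they are not from the paper), and the function is a simplified stand-in for the `lime` library, not its actual API.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical two-class legal corpus: 0 = contract, 1 = tort.
docs = [
    "the parties agree to the terms of this contract",
    "breach of contract damages shall be paid by seller",
    "the defendant's negligence caused the plaintiff's injury",
    "duty of care was owed and the injury was foreseeable",
]
labels = [0, 0, 1, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

def lime_style_explanation(text, predict_proba, n_samples=500, seed=0):
    """Simplified LIME-style explainer: mask random subsets of words,
    then fit a weighted linear surrogate on the binary keep/drop masks."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0, :] = 1  # keep the unperturbed instance in the sample
    texts = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    probs = predict_proba(texts)[:, 1]  # probability of class 1 (tort)
    # Weight each perturbation by its similarity to the original,
    # here simply the fraction of words kept.
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs, sample_weight=weights)
    # Surrogate coefficients rank each word's local importance.
    return sorted(zip(words, surrogate.coef_), key=lambda t: -abs(t[1]))

ranking = lime_style_explanation(
    "the defendant's negligence caused the injury", clf.predict_proba
)
for word, weight in ranking[:3]:
    print(f"{word}: {weight:+.3f}")
```

Grad-CAM and SHAP, the other two methods compared in the paper, attribute importance differently (gradient-based saliency and Shapley values, respectively), but they produce the same kind of per-token relevance scores that the legal professionals in the study were asked to judge.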
