ISSN 1008-2204
CN 11-3979/C

Explainability Obligation of Judicial Artificial Intelligence in the Context of XAI: A Theoretical Perspective Based on Judicial Candor

Abstract: Facing the “black box” problem of artificial intelligence, explainable artificial intelligence (XAI) is widely regarded as an effective tool for enhancing explainability in judicial decision-making. However, the concept of explainability in XAI in fact deviates from the explainability requirements of judicial decision-making. Approaching explainability from the theoretical perspective of judicial candor, the meaning of candor can be divided into subjective candor and objective candor according to whether the judge’s subjective state is taken into account, thereby realizing the judicial value of guiding action with reasonable justification. The analysis reveals that XAI, due to its post-hoc explanatory nature, fails to meet the requirements of subjective candor and thus deviates from judicial values, whereas interpretable artificial intelligence (IAI) can fulfill the requirements of judicial candor at the subjective level. Therefore, in accordance with the provisions on explanation obligations for automated decision-making in the Personal Information Protection Law of the People’s Republic of China, the explanation obligations of judicial AI deployers, providers, and judges deciding individual cases should be specified, ensuring that IAI rather than XAI is used in the key stages of judicial decision-making and thereby fulfilling the core requirements of judicial candor.
