Abstract:
In the face of the “black box” problem of artificial intelligence, explainable artificial intelligence (XAI) is widely regarded as an effective tool for enhancing explainability in judicial decision-making. The concept of explainability in XAI, however, deviates from what judicial decision-making requires of explainability. From the perspective of judicial candor, the requirement of explainability can be divided into “subjective candor” and “objective candor”, depending on whether the judge’s subjective state is taken into account; both serve the judicial value of guiding action through reasoned justification. The analysis shows that XAI, because its explanations are generated post hoc, cannot satisfy the requirement of subjective candor and thus deviates from judicial values. By contrast, interpretable artificial intelligence (IAI) inherently satisfies the requirement of judicial candor at the subjective level. Accordingly, building on the explanation obligations for automated decision-making stipulated in the Personal Information Protection Law, the respective obligations of judicial AI deployers, providers, and judges in individual cases should be specified. This would ensure that IAI rather than XAI technology is used in key judicial decision-making processes, thereby fulfilling the core requirements of judicial candor.