Abstract:
Traditional research on AI governance tends to rely on a single approach, whether technological ethics or legal regulation, and therefore fails to address the regulatory dilemmas arising from algorithmic autonomy, systemic complexity, and societal pervasiveness. This shortfall is particularly evident in the gap between theoretical supply and practical need on core issues such as responsibility attribution and causality tracing. Confronted with the dual challenges of reconfigured digital power and dispersed societal risks, it is necessary to systematically explore the jurisprudential construction of, and institutional responses to, the "accountability system" in AI governance, and to build a responsibility allocation framework suited to the characteristics of technological risk. A theoretical examination of its jurisprudential definition clarifies the essence of the accountability system as a composite governance tool, while an analysis of its developmental logic reveals the direction of its evolution. At the institutional level, the principle of liability must shift from a fault-based to a capacity-based model, reconstructing the mechanism that aligns authority with responsibility through the legal codification of technical standards. The governance structure should establish a multi-stakeholder framework capable of penetrating technological black boxes, strengthening the synergy among governmental oversight, industry self-regulation, and societal supervision. At the same time, promoting the deep integration of technical tools (e.g., explainable AI and blockchain-based evidence preservation) with legal rules will forge a dynamic regulatory system covering the entire algorithm lifecycle.