Abstract:
Traditional research on AI governance tends to focus on a single approach, whether technological ethics or legal regulation, and therefore struggles to address the regulatory dilemmas arising from algorithmic autonomy, systemic complexity, and societal pervasiveness. This gap is especially evident in the mismatch between available theory and practical needs on core issues such as responsibility attribution and causality tracing. Confronting the dual challenges of reconfigured digital power and diffused societal risk, AI governance requires a systematic exploration of the jurisprudential construction of, and institutional responses to, "accountability mechanisms," building a framework for allocating responsibility that is adapted to the characteristics of technological risk. A theoretical examination of their jurisprudential definition clarifies the essence of accountability mechanisms as a composite governance tool, and an analysis of their developmental logic reveals the direction of their evolution.