Abstract:
The development of Generative Artificial Intelligence (Generative AI) depends fundamentally on the utilization of personal information, which calls for a normative stance that actively supports technological innovation. However, Generative AI poses severe challenges to existing personal information protection norms, specifically regarding informed consent, data minimization, transparency, and information quality. To address these challenges, this article constructs a collaborative governance system that combines normative adaptation, which removes barriers to the compliant use of data, with normative safeguards, which mitigate the risks of such use. In terms of normative adaptation, the framework advocates shifting from an absolute to a relative standard for anonymization, clarifying the boundaries of the reasonable use of publicly available personal information, and implementing a tiered and classified approach to informed consent. In terms of normative safeguards, mechanisms for exercising individual rights are refined, including effective access to data, responsive rectification, and reasonable erasure; principles of liability attribution and rules for assessing damages are also clarified. Beyond rights exercise and remedies, the article proposes establishing a transparency disclosure mechanism based on Model Cards and strengthening systemic risk regulation, so as to achieve the preventive protection of personal information.