Abstract:
An AI agent is a software program, information system, or other entity capable of perceiving its external environment, making autonomous decisions, and performing tasks according to instructions. While this type of AI application raises the intelligence and automation of AI information services, it also creates new challenges for personal information protection: because the service model of AI agents is "personalized service," they must collect and process as much personal information as possible. Yet given the uncertainty surrounding both the basic functions of AI agents and the purposes of personal information processing, the minimum necessity principle, whose judgment logic rests on whether processing is "genuinely necessary for the purpose of processing personal information," is difficult to apply. In this scenario, the minimum necessity principle has not failed because it is disconnected from industry practice. Rather, the existing interpretive theory emphasizes the specific requirements of "minimum" and "necessary" while neglecting the personal information protection goal to which the principle points. "Minimum" does not mean the smallest quantity; it requires choosing, among the available technical solutions, the one that has "the least impact on personal rights and interests" and collects the relatively smallest amount of personal information. "Necessary" is not judged solely by the purpose of processing; it calls for a comprehensive assessment based on factors such as the application scenarios of AI agents, information service contracts, and technical protection measures.