China has recently proposed the creation of a blacklist for training data used in generative artificial intelligence (AI) models. This move aims to regulate the development and use of AI technologies, particularly those that generate text, images, or videos.
The proposal comes as concerns grow over the potential misuse of AI-generated content, such as deepfakes or misleading information. By creating a blacklist, China intends to restrict the use of certain types of data that could be deemed harmful or inappropriate.
The blacklist would include data that violates laws and regulations, infringes on individuals’ rights, or threatens national security. It would also encompass content that promotes violence, terrorism, obscenity, or any other form of illegal or harmful behavior.
China’s proposal emphasizes the need for AI developers and researchers to ensure the quality and integrity of training data. It aims to prevent the creation and dissemination of AI-generated content that could mislead or deceive the public.
The implementation of this blacklist would require AI developers to carefully vet and filter their training data before using it to train or fine-tune generative models, screening out any material that falls under the prohibited categories.
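In practice, such vetting would likely involve automated screening of candidate training samples against prohibited content before they enter a training corpus. The sketch below is purely illustrative: the placeholder blacklist terms and the simple substring check are hypothetical stand-ins, since the proposal itself does not specify filtering criteria or tooling.

```python
# Minimal sketch of blacklist-based filtering of text training data.
# The terms and the matching rule are hypothetical illustrations,
# not the actual criteria in China's proposal.

PROHIBITED_TERMS = {"example_banned_term_1", "example_banned_term_2"}


def is_clean(sample: str) -> bool:
    """Return True if the sample contains none of the prohibited terms."""
    lowered = sample.lower()
    return not any(term in lowered for term in PROHIBITED_TERMS)


def filter_training_data(samples: list[str]) -> list[str]:
    """Keep only samples that pass the blacklist check."""
    return [s for s in samples if is_clean(s)]


if __name__ == "__main__":
    raw = [
        "a harmless sentence about the weather",
        "a sentence containing example_banned_term_1",
    ]
    cleaned = filter_training_data(raw)
    print(f"kept {len(cleaned)} of {len(raw)} samples")
```

A real pipeline would go well beyond keyword matching, but the basic shape, a screening step that rejects samples matching a maintained list of prohibited content, reflects the kind of vetting the proposal appears to require.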