To address the growing problem of AI-generated images, the Google DeepMind team today launched a tool called SynthID, which embeds watermarks invisible to the human eye in AI-generated images. These watermarks can be detected with specialized AI detection tools, making it possible to distinguish real content from fake. The tool is already available to select customers on Google Cloud Platform.
SynthID works by embedding a watermark directly in an image's pixels without perceptibly affecting image quality. The watermark survives common editing operations such as cropping and scaling, and can only be identified by specialized AI detection tools. Demis Hassabis, CEO of Google DeepMind, said the watermark will become more subtle and more robust as the underlying AI models improve.
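To make the idea concrete, below is a minimal, purely illustrative sketch of a classic invisible-watermarking approach: a pseudorandom spread-spectrum pattern is added to an image's mid-frequency DCT coefficients and later detected by correlating against the same secret pattern. SynthID's actual method is unpublished and reportedly uses a learned deep-learning embedding that is far more robust to edits such as cropping; everything here (the key, strength, and function names) is a hypothetical toy, not Google's algorithm.

```python
# Toy spread-spectrum watermark in the DCT domain.
# Illustrates the embed/detect principle only -- NOT SynthID's method.
import numpy as np
from scipy.fft import dctn, idctn

KEY = 42          # secret key shared by embedder and detector (assumption)
STRENGTH = 4.0    # embedding strength: higher = easier to detect, more visible

def _pattern(shape, key):
    """Pseudorandom +/-1 pattern over a mid-frequency coefficient band."""
    rng = np.random.default_rng(key)
    mask = np.zeros(shape)
    h, w = shape
    band = (slice(h // 8, h // 2), slice(w // 8, w // 2))  # skip DC/low freqs
    mask[band] = rng.choice([-1.0, 1.0], size=mask[band].shape)
    return mask

def embed(gray_image, key=KEY, strength=STRENGTH):
    """Add the secret pattern to the image's DCT coefficients."""
    coeffs = dctn(gray_image.astype(float), norm="ortho")
    coeffs += strength * _pattern(coeffs.shape, key)
    return np.clip(idctn(coeffs, norm="ortho"), 0, 255)

def detect(gray_image, key=KEY):
    """Correlate the image's DCT coefficients with the secret pattern.
    Scores near 0 suggest no watermark; clearly positive scores suggest one."""
    coeffs = dctn(gray_image.astype(float), norm="ortho")
    pat = _pattern(coeffs.shape, key)
    sel = pat != 0
    return np.corrcoef(coeffs[sel], pat[sel])[0, 1]

if __name__ == "__main__":
    # Smooth gradient plus mild noise stands in for a natural image.
    img = np.tile(np.linspace(0, 255, 256), (256, 1))
    img = img + np.random.default_rng(0).normal(0, 2, img.shape)
    marked = embed(img)
    print(f"unmarked score: {detect(img):+.3f}")     # near 0
    print(f"marked score:   {detect(marked):+.3f}")  # clearly positive
```

Because the pattern is spread across many coefficients, no single pixel changes much (the watermark stays invisible), yet the aggregate correlation is statistically unmistakable to a detector holding the key; production systems like SynthID pursue the same trade-off with learned embeddings rather than a fixed transform.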
The SynthID tool tells you how likely it is that an image is AI-generated
Hassabis said SynthID was developed to address potentially dangerous uses of AI imagery such as deepfakes. He believes that building systems to identify AI-generated images is especially important ahead of the 2024 elections in the United States and the United Kingdom. He also said the basic idea behind SynthID could be applied to other media such as video and text.
Currently, SynthID is available only on Google Cloud Platform, primarily to customers using the Vertex AI platform and the Imagen image generator. Hassabis hopes to improve SynthID through testing in real-world scenarios, after which Google plans to deploy it more widely, share it with more partners, and increase public awareness and education.
Google is not alone: many other companies are also working on AI protection and security systems. Companies such as Meta and OpenAI, for example, are using a protocol called C2PA to tag AI-generated content with cryptographically signed metadata. Hassabis said SynthID could become an internet-wide standard, but he also emphasized that it is only an initial attempt, not a panacea for the problem of deepfakes. “It’s too early to think about scaling and the social discussions until we’ve proven that the fundamentals of the technology work,” he said.