Google SynthID: Adding Invisible Watermarks to AI-Generated Content


Google has launched new technology to embed watermarks and flag AI-generated content as part of a push to address growing challenges in verifying the authenticity of generative AI outputs across text, images, audio, and video.

The system, called SynthID, applies deep learning algorithms to watermark content produced by Google’s Gemini and Lyria AI tools, and the company claims the watermarks remain detectable even after modifications such as cropping, filtering, color adjustments, and compression.

The technology, currently in beta and integrated with existing gen-AI products, can also scan content to determine whether parts were generated by AI, Google said.

“SynthID’s watermarking technique is imperceptible to humans but detectable for identification,” the company declared.

The goal is to automate the discovery and tagging of AI-generated content at scale, helping to prevent its misuse for deepfakes, misinformation, or financial fraud.

Watermarking technologies for AI-generated content have not yet been widely adopted in production systems because of stringent quality, detectability, and computational-efficiency requirements, but Google believes its SynthID system is production-ready.

“To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems,” Google explained.

For text generation, Google said SynthID works by adjusting the probability scores of token selections during the AI generation process, and claimed the technique can be applied to passages as short as three sentences.
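Google has not published SynthID's exact scheme in this article, but the general idea of biasing token-selection probabilities with a secret key can be sketched as follows. Everything here (the key, the scoring function, the bias strength) is a hypothetical toy for illustration, not Google's implementation: a keyed pseudorandom function nudges each candidate token's probability, and a detector holding the same key can later score text for that bias.

```python
import hashlib

# Toy illustration only: SynthID's actual algorithm is not public in detail.
# A keyed pseudorandom function shifts token probabilities slightly during
# generation; a detector with the same key scores text for the shift.

SECRET_KEY = b"demo-key"  # hypothetical key, for illustration

def keyed_score(prev_token: str, candidate: str) -> float:
    """Deterministic pseudorandom score in [0, 1) derived from the secret key."""
    digest = hashlib.sha256(SECRET_KEY + prev_token.encode() + candidate.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def bias_probs(prev_token: str, probs: dict[str, float], strength: float = 0.5) -> dict[str, float]:
    """Nudge each candidate's probability by its keyed score, then renormalize."""
    biased = {tok: p * (1.0 + strength * keyed_score(prev_token, tok))
              for tok, p in probs.items()}
    total = sum(biased.values())
    return {tok: p / total for tok, p in biased.items()}

def detect_score(tokens: list[str]) -> float:
    """Average keyed score over adjacent token pairs; watermarked text trends high."""
    scores = [keyed_score(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(scores) / len(scores)
```

Because the bias is small and spread across many tokens, the output text reads normally, while the detector's average score separates watermarked from unwatermarked sequences statistically.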


For images and video, the technology embeds watermarks directly into pixels and individual frames, and Google insists the watermarks can survive modifications such as cropping or compression.

For audio content, SynthID converts sound waves into spectrograms before embedding watermarks, which the company says remain intact through various modifications including MP3 compression and speed adjustments.
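The waveform-to-spectrogram conversion the article describes can be sketched with a basic short-time Fourier transform. This is only the preprocessing step; the watermark embedding itself is proprietary and not shown here. The frame and hop sizes are arbitrary assumptions for the example.

```python
import numpy as np

# Illustrative sketch: SynthID reportedly converts audio into a spectrogram
# before embedding a watermark. Here we show only the conversion step,
# using a windowed short-time Fourier transform (STFT).

def spectrogram(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram: slide a Hann window over the signal and FFT each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Shape: (n_frames, frame_len // 2 + 1) -- time on axis 0, frequency on axis 1
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 1 kHz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

A watermark embedded in this time-frequency representation can be made redundant across frames and frequency bands, which is one plausible reason such marks survive lossy transforms like MP3 compression.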

The technology has been integrated into several Google products and released as open-source through the Google Responsible Generative AI Toolkit.

Google has also partnered with Hugging Face to make the technology available to developers.

Related: Reality Defender Banks $33M to Tackle AI Deepfakes

Related: Deepfake or Deep Fake? Unraveling the True AI Security Risks

Related: Fighting Deepfakes and Bots With Global Permissionless Blockchain Identity

Related: GetReal Labs Emerges From Stealth to Tackle Deepfakes
