Google SynthID: How AI Watermarks Work and How to Check for Them

Published April 2, 2026 by Which One is AI Team

As AI-generated images become harder to distinguish from real photographs, the technology industry and governments have turned to a different approach: marking AI content at the source. Google's SynthID is the most prominent example of this approach, embedding invisible watermarks directly into AI-generated images. This guide explains how the technology works, how you can check for watermarks, and what the broader landscape of AI content labeling looks like today.

What Is SynthID?

SynthID is an invisible watermarking system developed by Google DeepMind. It embeds a digital signal directly into the pixels of an image at the moment of generation. This signal is imperceptible to the human eye: a watermarked image looks identical to a non-watermarked version. However, a specialized detection tool can read the embedded signal and determine that the image was generated by an AI system.

The key innovation of SynthID is its resilience. Traditional visible watermarks (like a logo stamped on an image) can be cropped or painted over. SynthID's invisible watermark is designed to survive common image modifications including resizing, cropping, compression, color adjustments, and screenshot capture. The watermark is distributed across the entire image rather than concentrated in one area, making it extremely difficult to remove without degrading the image quality.

How Invisible Watermarking Works

At a technical level, SynthID works by making subtle modifications to the pixel values of an image during the generation process. These modifications are carefully chosen so that they are invisible to human viewers but form a detectable pattern when analyzed by the SynthID detection algorithm.

Think of it like a message written in invisible ink. You cannot see it with your eyes, but if you apply the right chemical (or in this case, the right algorithm), the message becomes readable. The watermark encodes information about the image's origin, confirming that it was produced by a specific AI system.

The detection system returns a confidence score rather than a simple yes or no answer. It indicates how likely it is that the image contains a SynthID watermark, accounting for the possibility that image modifications may have partially degraded the signal.
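Google has not published SynthID's exact algorithm, but the general family of techniques is well understood. The sketch below is a minimal, illustrative spread-spectrum watermarker in Python, not SynthID itself: a keyed pseudorandom pattern is added to the pixels at an amplitude too small to see, and the detector correlates the image against the same keyed pattern to produce a score. Every name and parameter here is invented for illustration.

```python
import numpy as np

KEY = 42          # shared secret between embedder and detector (illustrative)
AMPLITUDE = 2.0   # small enough to be invisible against 0-255 pixel values

def watermark_pattern(shape, key=KEY):
    """Keyed pseudorandom +/-1 pattern, spread across the whole image."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key=KEY):
    """Nudge every pixel slightly in the direction of the pattern."""
    return np.clip(image + AMPLITUDE * watermark_pattern(image.shape, key), 0, 255)

def detect(image, key=KEY):
    """Correlate the image with the keyed pattern; returns a score, not yes/no."""
    pattern = watermark_pattern(image.shape, key)
    return float(((image - image.mean()) * pattern).mean() / AMPLITUDE)

# A random grayscale array stands in for a real image.
img = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
print(f"unmarked:    {detect(img):.2f}")         # near 0
print(f"watermarked: {detect(embed(img)):.2f}")  # near 1
```

Note what the toy shares with the description above: the signal is spread across every pixel, and detection yields a graded score rather than a binary answer. What it lacks is SynthID's engineered robustness; this naive pattern would not survive cropping or recompression.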

How to Check for SynthID Watermarks

Currently, SynthID detection is not available as a standalone public tool. However, you can check for SynthID watermarks through Google's own products:

  1. Google Gemini: When you share an image with Google Gemini and ask about its origin, Gemini can indicate whether the image contains a SynthID watermark. This works for images generated by Google's own AI tools (such as Imagen through Gemini).
  2. Google Search: Google's "About this image" feature in search results may indicate whether an image has been identified as AI-generated, partly based on watermark detection.
  3. Metadata inspection: Images generated by Google's tools may also carry IPTC metadata tags indicating AI generation, which you can check using any EXIF/metadata viewer (a minimal byte-scan sketch follows this list).
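Embedded XMP/IPTC metadata is stored as plain text inside the image file, so even without a metadata library you can do a crude check for the IPTC digital source type value trainedAlgorithmicMedia, the standard designation for purely AI-generated media. The Python sketch below takes that byte-level shortcut; the filename is hypothetical, and a proper tool such as exiftool gives structured, more reliable output.

```python
from pathlib import Path

# IPTC's digital source type value for purely AI-generated media.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def has_ai_source_tag(path):
    """Crude check: scan the file's raw bytes for the IPTC/XMP marker.

    Embedded XMP metadata is plain text inside the file, so a byte
    search can find it without a full metadata parser. A dedicated
    viewer (e.g. exiftool) parses the metadata properly and is more
    reliable.
    """
    return AI_SOURCE_TYPE in Path(path).read_bytes()

print(has_ai_source_tag("downloaded_image.jpg"))  # hypothetical file
```

Remember that metadata is the weakest signal of the three: it can be stripped by a simple re-save, so its absence tells you nothing.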

Limitations of SynthID

While SynthID represents an important step forward, it has significant limitations that you should understand:

  1. Coverage: SynthID only marks images generated by Google's own models (such as Imagen). Images from other providers' tools carry no SynthID signal at all.
  2. Access: As noted above, there is no standalone public detection tool, so checking for the watermark currently runs through Google's own products.
  3. Robustness: The watermark is resilient, not indestructible. Sufficiently aggressive editing or regeneration can degrade the signal, which is one reason detection returns a confidence score rather than a verdict.
  4. Absence proves nothing: An image without a SynthID watermark may still be AI-generated, just by a system that does not watermark its output.

The C2PA Standard: A Broader Approach

Beyond invisible watermarking, the Coalition for Content Provenance and Authenticity (C2PA) has developed an open standard for content credentials. Rather than hiding information in pixels, C2PA attaches a cryptographically signed manifest to an image file that records its complete history: what tool created it, when it was created, and what edits have been applied.

C2PA credentials work like a digital chain of custody. Each step in an image's life (creation, editing, export) is recorded and signed. If someone modifies the image without updating the credentials, the chain breaks, alerting viewers that the content has been altered.
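A real C2PA manifest is a signed binary structure carried in the file and anchored to signing certificates, so the Python toy below is only a sketch of the chain-of-custody idea: an HMAC with a shared key stands in for certificate-based signatures, and all names are invented. Each step signs the content hash together with the previous step's signature, so any change made without re-signing breaks verification.

```python
import hashlib, hmac, json

SIGNING_KEY = b"tool-signing-key"  # stands in for a real signing certificate

def sign_step(content: bytes, action: str, prev_manifest=None):
    """Append one signed step (creation or edit) to the chain."""
    record = {
        "action": action,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": prev_manifest["signature"] if prev_manifest else None,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

def verify_step(content: bytes, manifest):
    """Recompute the signature; any tampering breaks the match."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    if record["content_hash"] != hashlib.sha256(content).hexdigest():
        return False  # content changed without re-signing
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...image bytes..."
manifest = sign_step(image, "created-by:example-generator")
print(verify_step(image, manifest))                # True: chain intact
print(verify_step(image + b"edited", manifest))    # False: chain broken
```

The design point the toy captures is that trust comes from the unbroken chain: a verifier does not need to know anything about the image itself, only that every recorded step checks out.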

Major companies including Adobe, Microsoft, Intel, and the BBC have adopted C2PA. Adobe's Content Credentials system, built on C2PA, is already integrated into Photoshop and Firefly. You can verify C2PA credentials using the Content Authenticity Initiative's verification tool at contentcredentials.org/verify.

The California AI Transparency Act (SB 942)

Legislation is beginning to catch up with technology. California's AI Transparency Act (SB 942) represents one of the most significant regulatory efforts to mandate AI content labeling. The law requires large generative AI providers to:

  1. Make a free AI detection tool publicly available, so users can check whether content was created by the provider's system.
  2. Offer users the option to apply a visible ("manifest") disclosure identifying content as AI-generated.
  3. Embed a latent, machine-readable disclosure (such as a watermark or metadata) in AI-generated content that identifies the provider.

While the law applies specifically to providers operating in California, its impact is expected to be much broader, as major AI companies serve users worldwide and may adopt these practices across all their products rather than maintaining separate systems for different jurisdictions.

The Future of AI Watermarking

The landscape of AI content labeling is evolving rapidly. Several trends are shaping where this technology is headed:

  1. Watermarking beyond images: Google has extended SynthID to text, audio, and video, and has open-sourced a version of SynthID for text so other developers can adopt it.
  2. Provenance at the point of capture: C2PA content credentials are beginning to be signed inside cameras and creative tools, so real photographs can prove their origin just as AI images can declare theirs.
  3. Regulatory pressure: Rules such as the EU AI Act's transparency obligations and laws like SB 942 increasingly require machine-readable marking of AI-generated content.

Until universal watermarking becomes a reality, visual detection skills remain essential. Watermarks only help when the AI provider has implemented them. For images from unknown sources, your ability to spot visual tells is still your most reliable tool.

To learn more about where AI detection technology is heading, read our article on the future of AI detection. For practical tips on spotting AI images with your own eyes, see our guide on how to spot AI-generated images.

Practice Your Detection Skills

Put what you learned into practice. Download Which One is AI? and test yourself.