As AI-generated images become the norm, it is imperative that we develop a way to establish the provenance of digital media.
The C2PA standard, developed by the Coalition for Content Provenance and Authenticity, is one such system. It creates a secure, tamper-evident record of a piece of content’s origin and history. When a photo is taken by a camera, edited in software, or generated entirely by an AI model, C2PA can embed verified details such as who or what created it, what tools were used, and whether it has been altered. This information travels with the file itself, protected by cryptographic signatures so that any alteration or forgery is detectable.
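To make the tamper-evident idea concrete, here is a minimal Python sketch of the underlying principle: hash the asset, sign a manifest describing its origin, and later verify both the signature and the hash. This is not the real C2PA format (which uses JUMBF containers and COSE signatures); the function names, the sidecar-style manifest, and the Ed25519 key choice are illustrative assumptions. It requires the third-party `cryptography` package.

```python
# Simplified illustration of C2PA's tamper-evident principle,
# NOT the actual C2PA manifest format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_manifest(asset_bytes: bytes, creator: str, tool: str,
                    private_key: Ed25519PrivateKey) -> dict:
    """Build a signed provenance record for an asset (sidecar-style)."""
    claim = {
        "creator": creator,   # who or what made it
        "tool": tool,         # software or model used
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": private_key.sign(payload).hex()}


def verify_manifest(asset_bytes: bytes, manifest: dict,
                    public_key: Ed25519PublicKey) -> bool:
    """True only if the signature is valid AND the asset is unchanged."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False  # manifest itself was forged or altered
    # Re-hash the asset: any pixel-level edit changes the digest.
    return (hashlib.sha256(asset_bytes).hexdigest()
            == manifest["claim"]["asset_sha256"])


key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest = create_manifest(photo, "Jane Doe", "ExampleCam v1", key)

print(verify_manifest(photo, manifest, key.public_key()))            # True
print(verify_manifest(photo + b"edit", manifest, key.public_key()))  # False
```

Note how the second check fails: editing even one byte of the asset breaks the hash match, which is the sense in which the record is "tamper-evident."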
One limitation, though, is that C2PA cannot survive screen recording: a re-captured copy is an entirely new file with no provenance data attached. This means provenance is only as strong as the ecosystem that supports it. C2PA works best when platforms, devices, and creators all participate in preserving and honoring the chain of trust. And while it cannot stop every form of deception, it can establish a baseline of accountability for content that moves through compliant channels people rely on, such as news outlets and social media platforms.
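Continuing the hypothetical sketch above (reusing `verify_manifest` and `key`), the snippet below shows what the screen-recording gap looks like to a verifier: the re-captured file simply has no manifest, so the strongest statement possible is "no provenance data," not "fake."

```python
from typing import Optional


def describe(asset_bytes: bytes, manifest: Optional[dict],
             public_key) -> str:
    """Report what a compliant verifier can actually conclude."""
    if manifest is None:
        # A screen-recorded copy arrives as a brand-new file with
        # no manifest, so absence of data is all we can report.
        return "no provenance data (e.g., a screen-recorded copy)"
    if verify_manifest(asset_bytes, manifest, public_key):
        return "provenance verified"
    return "provenance invalid: asset altered or manifest forged"


screen_recording = b"...pixels re-captured from a display..."
print(describe(screen_recording, None, key.public_key()))
# -> no provenance data (e.g., a screen-recorded copy)
```

This is why ecosystem participation matters: a missing manifest only raises suspicion if viewers and platforms have come to expect credentials on the content they trust.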
Read More: C2PA | Verifying Media Content Sources