Unverified by Default
Receipts for Reality
For most of my life, I could assume something simple: if I saw a photo or video, online or off, it probably happened.
Not always. Not perfectly. But probably.
That assumption is breaking fast—not because people suddenly became worse, but because authenticity is becoming cheap. Reality can be simulated. Believability can be manufactured. And the more convincing synthetic media gets, the less useful our old instincts become.
We’re drifting into a new default: skepticism. I get it. Deepfakes are improving. AI-generated “documentary” footage is showing up in feeds. Even legitimate content is getting accused of being fake because the vibe feels off.
The result is a kind of epistemic fog: you can't trust what you see, and you can't trust who's telling you what to trust.
But here’s what I think we need to say out loud:
Skepticism can’t be the destination.
If “assume it’s fake” becomes the baseline, we don’t become more rational. We become more tribal. People stop evaluating evidence and start choosing trust anchors: their favorite account, their in-group, their preferred narrative. And once that happens, the internet doesn’t get safer—it gets colder and easier to manipulate.
So we need a replacement primitive. A new rule that doesn’t require blind trust in platforms or constant paranoia from users.
I think that rule is this:
Unverified by default.
Verifiable when it matters.
“Fake vs real” is a dead end
The “fake vs real” framing creates two predictable failures.
1. It turns every disagreement into a reality war.
If someone doesn’t like what you posted, “fake” becomes a weapon. The accusation is cheap, and proving a negative is impossible. It’s gasoline on the fire.
2. It assumes we can reliably detect fakes at scale.
Maybe we can for a while—until we can’t. Detection is an arms race, and arms races don’t end in peace. They end in exhaustion.
Even if platforms label AI-generated content perfectly (they won’t), there’s a second problem: context gets stripped. Media gets reposted, screen-recorded, edited, and reuploaded. Metadata disappears. Chain of custody dissolves.
So instead of arguing about whether something is “fake,” we should start with a more honest statement:
Most digital media is unverified.
And once you accept that, a better question appears: What would it take to verify this?
Receipts for Reality
Think about how we treat transactions. You don’t “feel” that a purchase happened. You verify it. You can trace it. There’s a record. We need the media equivalent: receipts for reality.
In the simplest form, a “receipt” is a cryptographic record generated at the moment media is created—a verifiable statement of origin that can travel with the content and be checked later.
Not a vibe. Not a watermark. Not a promise — verifiable proof.
The moment a photo or video is captured, the system can create a provenance record that includes the basics:
- a fingerprint of the media (so you can detect alteration)
- when it was captured
- where it was captured (optional / privacy-controlled)
- who captured it (optional / privacy-controlled)
- how it’s been handled since (edits, transfers, republishing)
Why “raw” won’t save us
Right now there’s a growing belief that “imperfection” is proof. Blurry photos. Shaky video. Bad lighting. Unflattering angles. The raw aesthetic.
But signals get counterfeited. The same tools that can generate perfect faces will generate imperfect ones. The same systems that can produce cinematic footage will produce “accidental” mirror shots, shoe pics, and shaky handheld clips. Rawness will become a style preset.
So if we anchor trust to aesthetics, we’re building on sand. The only durable signal is a verifiable chain of custody.
Offline proof matters more than people realize
Reality doesn’t wait for Wi-Fi. Some of the highest-stakes moments happen when you’re offline, throttled, blocked, or in a place where networks are unreliable. If provenance requires constant connectivity, it fails the situations where it’s most needed.
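Nothing about signing requires a network: records can be created and queued on-device, then published once connectivity returns. A minimal sketch, assuming a hypothetical `publish` callback and again using HMAC as a stand-in for a hardware-backed signature:

```python
import hashlib
import hmac
import json
import time
from collections import deque

class OfflineProvenanceQueue:
    """Sign capture records locally, publish them when a network appears.

    Illustrative sketch: a real system would anchor the batch to a
    transparency log or timestamping service at sync time.
    """
    def __init__(self, device_key: bytes):
        self.device_key = device_key
        self.pending = deque()
        self.counter = 0  # local monotonic ordering until records are anchored

    def capture(self, media_bytes: bytes) -> dict:
        """Create and sign a record entirely offline."""
        self.counter += 1
        record = {
            "seq": self.counter,
            "fingerprint": hashlib.sha256(media_bytes).hexdigest(),
            "captured_at": int(time.time()),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(self.device_key, payload,
                                       hashlib.sha256).hexdigest()
        self.pending.append(record)  # no network needed yet
        return record

    def sync(self, publish) -> int:
        """Flush pending records to `publish` once online; returns count."""
        n = 0
        while self.pending:
            publish(self.pending.popleft())
            n += 1
        return n
```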
Receipts from the room (corroboration)
This is where a second concept matters: corroboration. Imagine that when something is captured, nearby devices can optionally provide independent witness receipts—“I was here too, and I observed the same event at roughly the same time and place.”
Not social validation. Not likes. Cryptographic witnesses. One device can lie. Multiple independent witnesses raise the cost of lying.
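A rough sketch of how witness receipts could be checked, under the same assumptions as before (illustrative field names, HMAC standing in for per-device public-key signatures):

```python
import hashlib
import hmac
import json

def witness_receipt(fingerprint: str, witness_key: bytes, witness_id: str) -> dict:
    """A nearby device attests: 'I observed media with this fingerprint.'"""
    claim = {"witness": witness_id, "observed": fingerprint}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(witness_key, payload, hashlib.sha256).hexdigest()
    return claim

def corroboration_count(fingerprint: str, receipts: list, known_keys: dict) -> int:
    """Count distinct witnesses whose receipts verify for this fingerprint."""
    valid = set()
    for r in receipts:
        key = known_keys.get(r["witness"])
        if key is None or r["observed"] != fingerprint:
            continue
        claim = {"witness": r["witness"], "observed": r["observed"]}
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, r["signature"]):
            valid.add(r["witness"])  # each witness counted once
    return len(valid)
```

Note that a forged receipt signed with the wrong key simply fails verification, and the same witness repeated twice still counts once: corroboration measures independent devices, not volume.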
Verified doesn’t mean “true”—and that’s okay
Verification doesn’t make something morally true. It doesn’t prevent propaganda. It doesn’t remove bias. It doesn’t guarantee full context. What it does is narrower and more powerful:
- Origin: it proves where the content came from.
- Integrity: it proves the content hasn’t been altered.
- Chain of custody: it tracks the content’s history.
- Accountability: it links content to a source.
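The chain-of-custody property above can be sketched as a simple hash chain, where each edit or transfer links to a digest of the previous entry, so history can be extended but not silently rewritten (a sketch; entry fields are illustrative):

```python
import hashlib
import json

def extend_chain(chain: list, action: str, fingerprint: str) -> list:
    """Append a custody event linked to the hash of the previous entry."""
    if chain:
        prev_hash = hashlib.sha256(
            json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
    else:
        prev_hash = "genesis"
    entry = {"action": action, "fingerprint": fingerprint, "prev": prev_hash}
    return chain + [entry]

def chain_is_intact(chain: list) -> bool:
    """Recompute every link; any tampering with a past entry breaks the chain."""
    prev_hash = "genesis"
    for entry in chain:
        if entry["prev"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return True
```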
A calmer social contract for media
Here’s the contract I want us to normalize:
Most content is unverified, and that’s okay.
When stakes are low, we live our lives.
When stakes are high, we demand receipts.
A claim with no receipts isn’t automatically false—it’s just unverified. A claim with receipts isn’t automatically good—it’s just accountable.
Receipts are grounding.
Unverified by default.
Verifiable when it matters.
If you’re building in this area—provenance, verification UX, identity credentials, capture-time signing—I’d love to compare notes. We’ll need open standards, interoperable verification tools, and a shared vocabulary that doesn’t collapse into “fake vs real.”