
Human vs AI Proof

Fidemark ships two attestation types. They share one resolver and one verification page, but they answer different questions.

Use Human Proof when:

  • A creator wants to claim authorship of an article, image, recording, code commit, etc.
  • A publisher wants to ship a “verified human” badge on its content.
  • A platform wants to distinguish original work from AI-generated submissions.

What’s attested: a SHA-256 hash of the content, the creator’s address, a content type label, a creation timestamp, and a proofMethod (the trust layer: see Trust layers).
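A minimal sketch of assembling those five fields, assuming illustrative field names (`contentHash`, `creator`, `contentType`, `createdAt`, `proofMethod`); the exact on-chain schema may differ:

```typescript
import { createHash } from "node:crypto";

// Illustrative Human Proof payload; field names mirror the prose above,
// not necessarily the exact Fidemark schema.
interface HumanProofData {
  contentHash: string; // SHA-256 of the content bytes
  creator: string;     // the creator's address
  contentType: string; // e.g. "article", "image", "code-commit"
  createdAt: number;   // creation timestamp (unix seconds)
  proofMethod: string; // trust layer, e.g. "wallet" or "pop-verified"
}

function sha256Hex(data: string): string {
  return "0x" + createHash("sha256").update(data).digest("hex");
}

function buildHumanProof(
  content: string,
  creator: string,
  contentType: string,
  proofMethod: string,
): HumanProofData {
  return {
    contentHash: sha256Hex(content),
    creator,
    contentType,
    createdAt: Math.floor(Date.now() / 1000),
    proofMethod,
  };
}
```

Note that only the hash goes on chain: the content itself stays wherever the creator publishes it, and anyone holding a copy can recompute the hash to check it against the attestation.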

AI Proof: “a specific model produced this output”


Use it when:

  • A developer wants to make their AI agent’s outputs traceable for compliance.
  • An enterprise needs an audit trail of which model generated which artifact.
  • A platform wants to disclose AI-generated content while preserving prompt confidentiality.

What’s attested: a SHA-256 hash of the output, the modelId, the provider, a SHA-256 hash of the prompt (proves which prompt without revealing it), and a parameters JSON string.
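The prompt-hash mechanic is worth seeing concretely. The sketch below (with illustrative field names, not the exact schema) shows how hashing keeps the prompt confidential while still letting anyone who holds the original prompt confirm it matches:

```typescript
import { createHash } from "node:crypto";

// Illustrative AI Proof payload; field names follow the prose above.
interface AiProofData {
  contentHash: string; // SHA-256 of the model output
  modelId: string;
  provider: string;
  promptHash: string;  // SHA-256 of the prompt
  parameters: string;  // generation parameters as a JSON string
}

const sha256Hex = (s: string) =>
  "0x" + createHash("sha256").update(s).digest("hex");

function buildAiProof(
  output: string,
  prompt: string,
  modelId: string,
  provider: string,
  params: object,
): AiProofData {
  return {
    contentHash: sha256Hex(output),
    modelId,
    provider,
    promptHash: sha256Hex(prompt), // the prompt itself is never published
    parameters: JSON.stringify(params),
  };
}

// Later, a party who was shown the prompt can check it against the
// attested hash without the prompt ever appearing on chain.
function promptMatches(proof: AiProofData, candidatePrompt: string): boolean {
  return proof.promptHash === sha256Hex(candidatePrompt);
}
```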

A single piece of content can carry attestations of both types, and they can reference each other via EAS’s refUID field. This is how trust gets layered onto AI Proof.

Example: an article was written by a human → translated by an AI → reviewed by a human editor.

Three attestations:

Human Proof (original)
↑ refUID
AI Proof (translation)
↑ refUID
Human Proof (review)

The verifier walks the chain and sees the full provenance trail. verifyChain(uid) returns the chain root-to-leaf in one call.
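The walk itself is a simple parent-pointer traversal. A minimal in-memory sketch, with a `Map` standing in for EAS lookups (the real `verifyChain` signature and attestation shape may differ):

```typescript
// Stand-in attestation record: each one optionally references a parent
// via refUID, exactly like EAS attestations do.
interface Attestation {
  uid: string;
  schema: "human" | "ai";
  refUID: string | null; // parent attestation, or null at the root
}

// Walk from a leaf attestation up its refUID links, then return the
// chain ordered root-to-leaf.
function verifyChain(
  store: Map<string, Attestation>,
  leafUid: string,
): Attestation[] {
  const chain: Attestation[] = [];
  const seen = new Set<string>();
  let uid: string | null = leafUid;
  while (uid) {
    if (seen.has(uid)) throw new Error("refUID cycle detected");
    seen.add(uid);
    const att = store.get(uid);
    if (!att) throw new Error(`unknown attestation ${uid}`);
    chain.push(att);
    uid = att.refUID;
  }
  return chain.reverse(); // root first, leaf last
}

// The article example: human original ← AI translation ← human review.
const store = new Map<string, Attestation>([
  ["0x1", { uid: "0x1", schema: "human", refUID: null }],
  ["0x2", { uid: "0x2", schema: "ai", refUID: "0x1" }],
  ["0x3", { uid: "0x3", schema: "human", refUID: "0x2" }],
]);
```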

See Guides → Workflow examples for how to read these diagrams and produce the same chain in TypeScript, Python, or Go.

This is the recommended way to express “a verified human published this AI output,” rather than tagging the AI Proof itself with a trust layer.

Human Proof (Layer 4 / pop-verified)
"I'm a unique Orb-verified human, publishing this content."
↑ refUID
AI Proof
"The content was produced by Claude Sonnet 4 from this prompt hash."

A verifier reading the AI Proof:

  1. Checks the AI Proof’s content hash matches what they’re looking at.
  2. Walks refUID to the parent Human Proof.
  3. Sees the parent’s proofMethod = pop-verified and reads the World ID nullifier.
  4. Concludes: “a verified human took responsibility for this AI-generated content.”
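The four steps above can be sketched as one check. The shapes and lookup function here are illustrative stand-ins, not the SDK's actual API:

```typescript
import { createHash } from "node:crypto";

interface AiProof {
  contentHash: string;   // SHA-256 of the output being examined
  refUID: string | null; // parent Human Proof, if any
}

interface HumanProof {
  proofMethod: string;  // e.g. "pop-verified"
  nullifier?: string;   // World ID nullifier, when pop-verified
}

const sha256Hex = (s: string) =>
  "0x" + createHash("sha256").update(s).digest("hex");

function verifyHumanBackedAiOutput(
  content: string,
  aiProof: AiProof,
  lookupHuman: (uid: string) => HumanProof | undefined,
): boolean {
  // 1. The AI Proof's content hash must match what we're looking at.
  if (aiProof.contentHash !== sha256Hex(content)) return false;
  // 2. Walk refUID to the parent Human Proof.
  if (!aiProof.refUID) return false;
  const parent = lookupHuman(aiProof.refUID);
  if (!parent) return false;
  // 3–4. A pop-verified parent with a nullifier means a unique,
  // Orb-verified human took responsibility for this output.
  return parent.proofMethod === "pop-verified" && !!parent.nullifier;
}
```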

This composes more flexibly than tagging the AI Proof with a layer. Mix Layer 1 (ENS) + AI Proof for “publisher with public identity,” Layer 2 (multi-party) + AI Proof for “this AI output was approved by N reviewers,” and so on.

Why AI Proof has no trust-layer field today


AI Proof and Human Proof live on different trust axes:

| Axis | Question it answers | Today’s spectrum |
| --- | --- | --- |
| Identity (Human Proof) | Who’s behind the signature? | L0 wallet → L1 ENS → L2 multi-party → L4 PoP |
| Execution (future AI Proof) | Was the inference real? | TEE-AI, zkML-AI, witness-attested-AI |

Adding a proofMethod field on the AI schema today would mean putting an identity layer where an execution layer belongs. It would let users tag an AI Proof “L4: pop-verified”, which sounds like the AI output is human-verified, but actually only the publisher is verified; they could still lie about which model produced what.

The composition pattern (chain via refUID) is unambiguous: a Layer 4 Human Proof references an AI Proof, and the verifier can read both claims separately. No conflation.

The AI Proof axis will eventually grow its own spectrum, parallel to the human one:

  • TEE-AI: the inference happened inside a TDX VM running an allowlisted model image.
  • zkML-AI: a zk-proof attests “this model with these weights produced this output from this prompt.”
  • Witness-attested-AI: an oracle (e.g. an Anthropic-signed receipt) co-attests the output.

Each will ship as a new schema + resolver pair, exactly like the human-side trust layers do. Today’s AI Proof is L0 on this future spectrum: signed by a wallet, no execution-environment guarantee.

Fidemark does not attest:

  • That the content is true, accurate, or high quality.
  • That the content was created exclusively by the claimed source: only that the attester signed off on it.
  • The identity of the human at Layer 0. A wallet-signed Human Proof says “this wallet attested authorship,” nothing more. Higher trust layers raise that bar: see Trust layers.