
Workflow examples

Fidemark’s primitives are intentionally narrow: each attestation says one thing about one piece of content. Rich provenance comes from chaining those primitives via EAS’s native refUID field. This guide shows how to read the chain diagrams that appear in the rest of the docs and how to produce the same shape from any of the three SDKs.

Throughout the docs you will see stacked attestation diagrams like this:

Human Proof (proofMethod = pop-verified) <-- leaf (newest)
↑ refUID
AI Proof <-- parent

How to read it:

  • Top to bottom = leaf to root. The newest attestation is at the top; each ↑ refUID arrow points at the parent it references.
  • Each row is one EAS attestation with its own UID, attester, and decoded fields. They are completely independent on chain.
  • The arrow is the only link. EAS does not enforce that the parent exists or has any relationship to the child; the chain is a hint that verifiers can choose to follow with verifyChain.
  • The annotations on the right (e.g. proofMethod = pop-verified) call out which trust layer each attestation occupies. Layers are described in Trust layers.

A verifier reading the leaf walks refUID upward and synthesises the combined claim:

“An Orb-verified unique human took responsibility for publishing content 0xabc..., and that content was produced by Claude Sonnet 4 from prompt hash 0xdef....”

Neither attestation alone says that. The composition does.
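The two-row diagram above can be written out as plain data. This is a hypothetical sketch: the field names and decoded shapes below are assumptions for illustration, not the SDK's actual types.

```typescript
// One object per row of the diagram. Field names mirror the diagram
// annotations; they are illustrative, not the SDK's type definitions.
interface Attestation {
  uid: string;
  refUID: string | null; // null at the root
  attester: string;
  type: "ai-proof" | "human-proof" | "multi-party-proof";
  fields: Record<string, string>;
}

const aiProof: Attestation = {
  uid: "0xAAA",
  refUID: null, // root: references nothing
  attester: "0xauthor",
  type: "ai-proof",
  fields: { modelId: "claude-sonnet-4-6", promptHash: "0xdef" },
};

const humanProof: Attestation = {
  uid: "0xBBB",
  refUID: "0xAAA", // <-- the only link between the two rows
  attester: "0xauthor",
  type: "human-proof",
  fields: { proofMethod: "pop-verified", contentHash: "0xabc" },
};
```

Each object stands alone on chain; only the `refUID` on the leaf ties the two together, which is exactly what a verifier follows upward.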

Worked example: AI output published by a verified human


This is the canonical “AI publication” workflow. The author runs the AI generation, signs an AI Proof for the output, then layers a Human Proof on top at the strongest identity layer they have available (here, World ID).

Step 1 records the model and prompt; step 2 binds the publication to a unique human and references step 1.

Step 1: AI Proof, recording the model and prompt

const ai = await fidemark.attestAI({
  content: aiResponse,
  modelId: "claude-sonnet-4-6",
  provider: "anthropic",
  prompt: userPrompt,
  parameters: { temperature: 0.7 },
});
console.log(ai.uid); // -> 0xAAA...

Step 2: Human Proof at Layer 4, referencing the AI Proof

const human = await fidemark.attestHumanWithPoP({
  content: aiResponse, // same bytes as the AI Proof
  contentType: "text/article",
  worldIdProof, // from IDKit
  refUID: ai.uid, // <-- chain link
});
console.log(human.uid); // -> 0xBBB...

The resulting chain:

Human Proof (proofMethod = pop-verified) UID = 0xBBB... <-- leaf
↑ refUID
AI Proof (claude-sonnet-4-6, anthropic) UID = 0xAAA... <-- parent

A verifier walks the chain by calling verifyChain(0xBBB...) and gets back [AI Proof, Human Proof] in root-to-leaf order.

A. AI translation of a human original

A human authors an article, then an AI translates it. The translation is its own AI Proof, referencing the original.

AI Proof (modelId = gpt-4o-translate) UID = 0xT... <-- leaf
↑ refUID
Human Proof (proofMethod = ens-verified) UID = 0xH... <-- parent

Useful when consumers want to know which translation of which original they are reading. The translation’s contentHash differs from the original’s; the chain is what links them as the same work.
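The point about differing hashes can be checked directly. A minimal sketch, assuming contentHash is a SHA-256 of the bytes (the article texts below are placeholders):

```typescript
import { createHash } from "node:crypto";

// Hash helper: hex digest with the 0x prefix used throughout the docs.
const sha256 = (s: string): string =>
  "0x" + createHash("sha256").update(s).digest("hex");

// The original and its translation are different bytes, so their
// content hashes share nothing. Only the refUID link from the
// translation's AI Proof back to the original's Human Proof records
// that they are the same work.
const originalHash = sha256("The quick brown fox jumps over the lazy dog.");
const translationHash = sha256("Le renard brun rapide saute par-dessus le chien paresseux.");

console.log(originalHash === translationHash); // false
```

Nothing in the hashes themselves connects the two documents, which is why the chain link carries the relationship.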

B. Multi-party endorsement of a Human Proof


A creator publishes solo, then a newsroom co-signs with a multi-party attestation referencing the original.

Multi-party Proof (3 of 3 editors signed) UID = 0xM... <-- leaf
↑ refUID
Human Proof (proofMethod = ens-verified) UID = 0xH... <-- parent

The leaf says “the editorial board endorses this.” The parent says “the author published it.” Verifiers can require both before showing a “verified by The Newsroom” badge.
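Verifier-side badge logic along those lines might look like the following sketch. The names (`showNewsroomBadge`, `signerCount`, `signerThreshold`) are illustrative assumptions, not the SDK's API:

```typescript
interface ChainEntry {
  type: string;
  fields: Record<string, string | number>;
}

// Require BOTH links before showing a "verified by The Newsroom" badge:
// the board's endorsement must be fully signed, and the parent must be
// an identity-verified Human Proof from the author.
function showNewsroomBadge(chain: ChainEntry[]): boolean {
  const endorsement = chain.find((a) => a.type === "multi-party-proof");
  const authorship = chain.find((a) => a.type === "human-proof");
  return (
    endorsement !== undefined &&
    endorsement.fields.signerCount === endorsement.fields.signerThreshold &&
    authorship?.fields.proofMethod === "ens-verified"
  );
}

// Chain in root-to-leaf order, as verifyChain would return it.
const chain: ChainEntry[] = [
  { type: "human-proof", fields: { proofMethod: "ens-verified" } },
  { type: "multi-party-proof", fields: { signerCount: 3, signerThreshold: 3 } },
];
console.log(showNewsroomBadge(chain)); // true
```

Dropping either link (or an unmet signer threshold) makes the check fail, which is the policy the paragraph above describes.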

C. Off-chain draft promoted to on-chain final


The author signs an off-chain envelope with signWithDelegated: true while drafting, then a publishing service brings it on-chain when the article goes live, optionally adding a Human Proof at the moment of publication.

Human Proof at publication time UID = 0xP... <-- leaf
↑ refUID
AI Proof promoted from off-chain envelope UID = 0xO... <-- parent (different from envelope.uid)

The off-chain envelope remains independently verifiable forever; the on-chain promotion creates a new UID that subsequent attestations can reference.

Verifying a chain

Use verifyChain(uid) to walk the refUID graph. The walker stops at depth 32, returns whatever it has gathered when it hits a non-Fidemark schema or a missing UID, and is cycle-safe.

const chain = await fidemark.verifyChain(leafUID);
// chain[0] is the root, chain.at(-1) is the leaf
for (const att of chain) {
  console.log(att.type, att.uid, att.attester);
}
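The walker's rules (depth limit, stop at a foreign schema or missing UID, cycle safety) can be sketched self-contained. This is an illustrative reimplementation over an in-memory store, not the SDK's internals:

```typescript
interface StoredAttestation {
  uid: string;
  refUID: string | null;
  schema: string; // "fidemark" stands in for the Fidemark schema UIDs
  type: string;
}

// Walk refUID upward from the leaf, then return root-to-leaf order.
function walkChain(
  store: Map<string, StoredAttestation>,
  leafUID: string,
  maxDepth = 32,
): StoredAttestation[] {
  const seen = new Set<string>(); // cycle safety
  const leafToRoot: StoredAttestation[] = [];
  let uid: string | null = leafUID;

  while (uid !== null && leafToRoot.length < maxDepth && !seen.has(uid)) {
    const att = store.get(uid);
    if (!att || att.schema !== "fidemark") break; // missing UID or foreign schema: return what we have
    seen.add(uid);
    leafToRoot.push(att);
    uid = att.refUID;
  }
  return leafToRoot.reverse(); // chain[0] is the root, last entry is the leaf
}

// The worked example's two-link chain as an in-memory store.
const store = new Map<string, StoredAttestation>([
  ["0xAAA", { uid: "0xAAA", refUID: null, schema: "fidemark", type: "ai-proof" }],
  ["0xBBB", { uid: "0xBBB", refUID: "0xAAA", schema: "fidemark", type: "human-proof" }],
]);
console.log(walkChain(store, "0xBBB").map((a) => a.type)); // root first, leaf last
```

The `seen` set is what makes a refUID cycle terminate instead of looping; the depth cap bounds work even on very long honest chains.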
  • Chains are optional. A standalone Human Proof or AI Proof is a complete attestation; chains only add value when the composition is meaningful.
  • Same contentHash is fine, different contentHash is fine. EAS doesn’t enforce either. Use the same hash when both attestations are about the same bytes (Human + AI on the same article); use a different hash when each layer is about different bytes (translation, transformation, derivative).
  • Order does not imply authority. The leaf is just the latest entry in the chain. A verifier may weight any link more or less heavily based on its trust layer (proofMethod).
  • Anyone can attach to anyone’s UID. EAS does not gate refUID. If your verifier UI cares whether the parent’s attester is the same as the child’s, check it explicitly when displaying the chain.