
Trust layers

A “trust layer” is shorthand for “what a Fidemark attestation actually proves about the thing it’s signed for.” Fidemark has two orthogonal trust spectra, one per attestation type, because the two attestation types answer different questions.

Identity axis (Human Proof)

The classic trust spectrum. The question being sharpened: who’s behind this signature?

| Layer | proofMethod | Assurance | Status |
|---|---|---|---|
| 0 | wallet-signed | Low: proves a wallet holder attested. | Available |
| 1 | ens-verified | Medium: wallet linked to a public ENS name. | Available |
| 2 | multi-party | Medium-high: multiple independent attesters co-sign atomically. See Multi-party. | Available |
| 3 | tee-attested | High: content creation verified inside a Trusted Execution Environment. | Planned |
| 4 | pop-verified | High: wallet linked to a proof-of-personhood credential (Worldcoin). See PoP-verified. | Available |

This axis lives in the Human Proof schema’s proofMethod field. Every Human Proof declares its layer; verifiers choose their assurance threshold.
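A minimal sketch of how a verifier might apply such a threshold. The layer mapping mirrors the table above; the handling of versioned values like "tee-attested-tdx-v4" (from the planned TEE schema) is an assumption, not shipped SDK behavior.

```ts
// Map each Human Proof proofMethod to its identity-axis layer (from the table above).
const IDENTITY_LAYERS: Record<string, number> = {
  "wallet-signed": 0,
  "ens-verified": 1,
  "multi-party": 2,
  "tee-attested": 3,
  "pop-verified": 4,
};

/** True if the attestation's proofMethod meets the verifier's chosen threshold. */
function meetsIdentityThreshold(proofMethod: string, minLayer: number): boolean {
  // Tolerate versioned values such as "tee-attested-tdx-v4" (assumed form).
  const layer = IDENTITY_LAYERS[proofMethod.split("-tdx-")[0]];
  return layer !== undefined && layer >= minLayer;
}

meetsIdentityThreshold("pop-verified", 2); // true
meetsIdentityThreshold("wallet-signed", 2); // false
```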

Execution axis (AI Proof)

The question being sharpened: was the inference real? Today this axis has only one point (the default); new layers will ship as new schema + resolver pairs over time.

| Layer | Mechanism | Assurance | Status |
|---|---|---|---|
| 0 | wallet-signed-ai | Low: a wallet claims model X produced this output; the signer’s identity isn’t verified by the production environment. | Available |
| 1 | witness-attested-ai | Medium: an oracle (vendor-signed receipt or zkTLS notary) co-attests the output. Depends on vendor opt-in or notary integration; no frontier vendor publishes a signing key today. | Planned |
| 2 | tee-attested-ai | High: inference happened inside a TDX VM running an allowlisted model image. | Planned |
| 3 | zkml-ai | High: a zk-proof attests “this model with these weights produced this output from this prompt.” | Planned |

AI Proof today doesn’t carry a proofMethod field. To express trust in the publisher of an AI Proof, use the composition pattern instead of overloading the schema.

TEE attestation (both axes)

A single mechanism (on-chain TEE quote verification) appears on both trust axes, answering a different question on each side. The two are sketched together here because the cryptographic substrate, verifier contract, and infrastructure burden are shared; only the schema fields and the thing being attested differ.

Shared mechanism: an Intel TDX (or AMD SEV-SNP, or AWS Nitro) quote produced by a TEE running an allowlisted code image. The quote is a hardware-signed blob proving “this code measurement ran on this CPU and produced this output.” The Fidemark resolver re-runs the quote verification on chain via a verifier contract (e.g. Automata DCAP on Base).

Shared trust assumption: Intel’s hardware root of trust + the producer-image allowlist. Strong, but you trust silicon vendors.
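As a sketch of what the verifier-contract hop looks like from off chain. The verifyQuote interface below is hypothetical; a real contract (e.g. Automata DCAP) exposes its own ABI, so treat this as the shape of the call, not its signature.

```ts
import { Contract, JsonRpcProvider } from "ethers";

// Hypothetical quote-verifier interface (illustrative, not the Automata ABI).
const QUOTE_VERIFIER_ABI = [
  "function verifyQuote(bytes rawQuote) view returns (bool ok, bytes32 mrtd, bytes32 reportData)",
];

async function checkQuote(verifierAddress: string, rawQuote: Uint8Array) {
  const provider = new JsonRpcProvider("https://mainnet.base.org");
  const verifier = new Contract(verifierAddress, QUOTE_VERIFIER_ABI, provider);

  // The verifier contract re-runs DCAP verification: the certificate chain up
  // to Intel's root of trust, TCB status, and the signature over the quote body.
  const [ok, mrtd, reportData] = await verifier.verifyQuote(rawQuote);
  if (!ok) throw new Error("quote failed hardware verification");
  return { mrtd, reportData }; // caller still checks the MRTD allowlist + reportData binding
}
```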

Identity axis: tee-attested (Layer 3, Human Proof)


Question it answers: “was this content authored by code running inside a verified TEE?”

Use case: a Human Proof published by a workflow (newsroom CMS, legal-document generator, content-management pipeline) where the code that produced the content runs inside a TEE. The MRTD measurement identifies the workflow’s binary. A verifier knows the content came from that specific code path, not a tampered fork.

Planned schema (Human-side TEE):

```
bytes32 contentHash   // SHA-256 of authored content
string  contentType
address creator       // resolver enforces creator == attester
uint64  createdAt
bytes32 mrtd          // measurement of the producer workflow image
bytes32 reportData    // commits to (contentHash, creator)
bytes   quote         // ~5 KB TDX quote
string  proofMethod   // "tee-attested" / "tee-attested-tdx-v4"
```

Execution axis: tee-attested-ai (TEE-AI, AI Proof)


Question it answers: “did the inference happen inside a verified TEE running an allowlisted model image?”

Use case: an AI Proof where the model inference itself runs inside a TEE. The MRTD identifies the model+runtime image (e.g. a TDX VM running Llama-3.1-70B with vLLM). A verifier knows the output really came from that model, not a different one labeled the same way.

Planned schema (AI-side TEE-AI):

```
bytes32 contentHash   // SHA-256 of model output
string  modelId
bytes32 mrtd          // measurement of the model+runtime image
bytes32 reportData    // commits to (contentHash, attester)
bytes   quote         // ~5 KB TDX quote
string  proofMethod   // "tee-attested-ai-tdx-v4"
```

Scope: any code that fits in a TDX VM. This is already viable for both content workflows and inference-heavy AI workloads, and it’s the most pragmatic short-term answer for closed proprietary models (Claude, GPT-4), since the model owner can run their own producer with their own MRTD allowlist. No SDK surface ships today; both axes will get dedicated entry points when the resolver + schema pair lands.

Execution axis: zkml-ai (Layer 3, AI Proof)

Question it answers: “does the inference’s math actually reproduce this output?”

Mechanism: a zk-SNARK over the model’s forward pass. Public inputs are the hashes of (model weights, prompt, output). Verifying the proof on chain proves the inference was honest: someone really did run model M with weights W on prompt P and got output O. No trust in vendors, hardware, or oracles, only in the SNARK’s soundness.

On-chain shape (planned):

```
bytes32    contentHash       // hash of output
bytes32    promptHash
bytes32    modelCommitment   // weights digest (Merkle root of tensors)
string     modelId
uint256[N] proof             // Groth16/PLONK/STARK proof
uint64     createdAt
```

Trust assumption: the cryptography. Strongest on the AI execution axis when it’s available.

Why it’s hard today:

  1. Proof generation is expensive. State of the art (EZKL, Modulus Labs, Risc Zero) handles small models (millions of parameters) in minutes-to-hours on GPU clusters. Frontier-scale LLMs (10B+ params) are not feasible to prove in 2026.
  2. Model identity needs a commitment scheme. “Same model” means bit-identical weights. Open-weights models (Llama, Mistral) work because anyone can recompute the commitment (a sketch follows this list). Closed models (Claude, GPT-4) require the owner to publish a weight digest, which they typically won’t.
  3. On-chain proof verification is the easy part: ~250 k gas for Groth16, ~500 k for PLONK. Cheap on Base; not the blocker.
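To make point 2 concrete: a minimal sketch of a weight commitment anyone can recompute for an open-weights model. The leaf encoding (name || tensor bytes, SHA-256, name-sorted, odd leaf duplicated) is an assumed layout, not the scheme a shipped zkml-ai schema would fix.

```ts
import { createHash } from "node:crypto";

const sha = (b: Uint8Array): Buffer => createHash("sha256").update(b).digest();

// Illustrative commitment: Merkle root over per-tensor digests, leaves ordered
// by tensor name. A real zkml-ai schema would pin the leaf encoding and hash.
function modelCommitment(tensors: Map<string, Uint8Array>): Buffer {
  let level: Buffer[] = [...tensors.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([name, bytes]) => sha(Buffer.concat([Buffer.from(name), bytes])));
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2)
      next.push(sha(Buffer.concat([level[i], level[i + 1] ?? level[i]]))); // duplicate odd leaf
    level = next;
  }
  return level[0]; // the bytes32 modelCommitment
}
```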

Realistic scope (2026): open-weights small/medium models. Frontier closed-source LLMs are years away.

Architecture, when it ships: new schema (fidemark.zkml-ai.v1) + dedicated FidemarkZkmlAIResolver that calls a zkML verifier contract (provider-specific: Risc Zero / EZKL / Polygon zk verifier). Same pattern as Layer 4 PoP wrapping IWorldID. SDK gets a dedicated entry point shaped to the chosen provider, paired with the equivalent for the TEE execution-axis layer.

Execution axis: witness-attested-ai (Layer 1, AI Proof)

Question it answers: “did a trusted third party see this inference happen?”

Mechanism: an oracle co-signs the AI output. A model vendor (or a neutral broker sitting in front of the API) signs (modelId, promptHash, outputHash, timestamp) at inference time. The Fidemark resolver verifies the signature on chain against an allowlisted public key.

Trust assumption: the oracle’s honesty + key custody. Weaker than TEE (no hardware root). Much weaker than zkML (no math). But cheaper to produce and verify than either.

On-chain shape (planned):

```
bytes32 contentHash
bytes32 promptHash
string  modelId
address witness            // allowlisted oracle address
uint64  witnessedAt
bytes   witnessSignature   // ECDSA over (contentHash, promptHash, modelId, witnessedAt)
```

The resolver maintains an allowlistedWitnesses mapping (owner-managed, same pattern as the proofMethod allowlist on the Human resolver). Adding a new oracle key is an owner-only call; no new resolver per vendor.
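Off chain, the witness flow might look like the sketch below (ethers v6). The packed encoding and the EIP-191 personal-message wrapping are assumptions the shipped resolver would pin down; on chain, the resolver performs the equivalent ecrecover against allowlistedWitnesses.

```ts
import { Wallet, solidityPackedKeccak256, getBytes, verifyMessage } from "ethers";

// Witness side: sign the receipt tuple at inference time.
async function signReceipt(witness: Wallet, r: {
  contentHash: string; promptHash: string; modelId: string; witnessedAt: bigint;
}): Promise<string> {
  const digest = solidityPackedKeccak256(
    ["bytes32", "bytes32", "string", "uint64"],
    [r.contentHash, r.promptHash, r.modelId, r.witnessedAt],
  );
  return witness.signMessage(getBytes(digest)); // ECDSA over the receipt digest
}

// Verifier side: recover the signer and check it against the allowlist.
function isAllowlistedWitness(digest: string, sig: string, allowlist: Set<string>): boolean {
  return allowlist.has(verifyMessage(getBytes(digest), sig));
}
```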

The honest part: none of this exists from frontier vendors today. No major model provider (Anthropic, OpenAI, Google) currently publishes a signing key for inference receipts. The closest adjacent infrastructure:

  • TLS to the vendor’s API proves “the vendor’s endpoint returned this blob to me” but is not transferable. A third party cannot reverify the session.
  • TLS notary protocols (TLSNotary, Reclaim, zkTLS) convert a TLS session into a transferable proof. The witness is the notary network, not the vendor. Production-grade but not widely deployed against frontier APIs.

So witness-attested-ai becomes shippable through one of two paths:

  1. Vendor opt-in. A model provider publishes a signing key and signs receipts at inference time. Cheapest path, but requires vendor cooperation that doesn’t exist in 2026.
  2. zkTLS notary integration. A user runs the API call through a notary; the resulting proof attests the API response without requiring vendor cooperation. The on-chain schema changes to verify the notary’s proof system instead of a plain ECDSA signature.

Realistic scope (2026): medium-assurance fallback for closed frontier models if path 1 or 2 lands. Useful as the only layer above L0 that closed frontier vendors could ship without infra rebuilds, but contingent on cooperation that doesn’t exist yet.

Why it’s medium-assurance, not high: a receipt proves the witness key signed something. It doesn’t prove the inference actually ran on the claimed model, that the output wasn’t cherry-picked from N samples, or that the witness key isn’t compromised. You’re trusting a brand and its key custody.

Three planned layers sit on the AI execution axis (L1 through L3). They differ on trust assumption, cost, and which models they work for.

| | L1 witness-attested-ai | L2 tee-attested-ai | L3 zkml-ai |
|---|---|---|---|
| Trust assumption | Oracle’s honesty + key custody (or zkTLS notary) | Hardware root of trust (Intel/AMD/AWS) + image allowlist | Cryptographic soundness only |
| Proof generation cost | Trivial (one signature) or moderate (zkTLS) | Low (quote in milliseconds) | High (minutes-to-hours on GPU clusters) |
| Proof verification cost | ~30k gas (ECDSA) or higher (zkTLS verifier) | ~500k–2M gas | ~250k–1M gas |
| Works for closed frontier models | Yes, if vendor signs receipts or via zkTLS | Yes, if vendor runs the producer | No (needs weight commitments) |
| Works for frontier LLMs | Yes | Yes (10B+ params) | Not in 2026 |
| Single point of failure | Oracle key (or notary network) | Hardware vendor + image management | None |
| Shippable today | Pending vendor opt-in or zkTLS integration | Pending resolver pair | Pending circuits |

The three are complementary, not competing:

  • L1 witness-attested-ai: medium-assurance fallback when neither TEE nor zkML is available; the only realistic path for frontier closed-source models without vendor TEE deployment.
  • L2 tee-attested-ai: pragmatic short-term answer for production-scale AI provenance when the model owner controls the runtime.
  • L3 zkml-ai: cryptographically purer long-term answer that becomes viable as proof-generation infrastructure scales; works only for open-weights models in 2026.

The two axes compose via referenced attestation chains (refUID). Express “a verified human published this AI output” with two attestations linked together:

```
Human Proof (proofMethod = pop-verified)    identity axis: L4
    ↑ refUID
AI Proof                                    execution axis: L0 (today)
```

A verifier reading the AI Proof walks refUID to the parent and sees both claims separately:

  • “An Orb-verified human took responsibility for publishing this.” (from the Human L4)
  • “The content was produced by Claude Sonnet 4 from prompt hash 0x…” (from the AI Proof)

This stays unambiguous as new AI-side layers ship. Future composition: Human L4 ← AI Proof (TEE-attested-AI) reads as “verified human published, inside a verified execution environment.”
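Underneath, this is the plain EAS refUID pattern. A sketch of publishing the child AI Proof against an already-published Human Proof; the schema UID, field list, and model name are illustrative, and the shipped Fidemark SDK wraps this in its own helpers.

```ts
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import type { Signer } from "ethers";

const EAS_ADDRESS = "0x4200000000000000000000000000000000000021"; // EAS predeploy on Base

// Illustrative field list; the real AI Proof schema carries more fields.
const aiEncoder = new SchemaEncoder("bytes32 contentHash, bytes32 promptHash, string modelId");

async function publishAiProof(signer: Signer, aiSchemaUid: string, humanProofUid: string,
                              contentHash: string, promptHash: string): Promise<string> {
  const eas = new EAS(EAS_ADDRESS);
  eas.connect(signer);
  const tx = await eas.attest({
    schema: aiSchemaUid,
    data: {
      recipient: "0x0000000000000000000000000000000000000000",
      expirationTime: 0n,
      revocable: true,
      refUID: humanProofUid, // links the execution-axis claim to the identity-axis claim
      data: aiEncoder.encodeData([
        { name: "contentHash", value: contentHash, type: "bytes32" },
        { name: "promptHash", value: promptHash, type: "bytes32" },
        { name: "modelId", value: "claude-sonnet-4", type: "string" }, // illustrative
      ]),
    },
  });
  return tx.wait(); // resolves to the new attestation's UID
}
```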

See Human vs AI Proof for worked examples, or Guides → Workflow examples for TypeScript / Python / Go code that produces the same chains.

Design principles:

  • No protocol-level gating. Anyone can issue Layer 0 on either axis. Permissionless by design.
  • Verifiers choose. A casual reader might accept Layer 0. An EU AI Act auditor might require identity-axis Layer 2+ AND execution-axis Layer 3+ on the same chain.
  • Composable. Each layer is its own attestation; chains compose via refUID. Old attestations stay valid forever; new layers extend the spectrum without breaking what shipped before.

The Fidemark Resolver maintains an allowlist of accepted proofMethod values for the Human schema. Simple layer additions on the same axis (e.g. ens-verified) are an owner-only call.

When a new layer requires different on-chain validation logic (multi-party signature recovery, World ID Groth16 verification, TDX quote verification), it ships as a new schema paired with a new resolver. Old schemas keep their guarantees; old attestations stay verifiable forever.

To evaluate an attestation chain as a verifier (a sketch follows the list):

  1. Read the attestation’s proofMethod (or schema, for AI Proof and execution-axis layers).
  2. Decide whether that layer’s assurance meets your use case.
  3. Walk refUID chains to gather identity-axis claims about an AI Proof, or vice versa.
  4. Treat each attestation as evidence at the level its layer claims, never as a stronger property by inference.
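A minimal sketch of steps 1-4 using the EAS SDK. It assumes the EAS instance is already connected to a provider; schema-specific decoding (e.g. extracting proofMethod with the matching SchemaEncoder) is elided.

```ts
import { EAS } from "@ethereum-attestation-service/eas-sdk";

type Att = Awaited<ReturnType<EAS["getAttestation"]>>;
const ZERO_UID = "0x" + "00".repeat(32); // refUID of a chain root

// Collect an attestation and everything it references, leaf to root.
async function collectChain(eas: EAS, leafUid: string): Promise<Att[]> {
  const chain: Att[] = [];
  for (let uid = leafUid; uid !== ZERO_UID; ) {
    const att = await eas.getAttestation(uid); // step 1: read the attestation on chain
    chain.push(att);                           // step 4: evidence at its own layer only
    uid = att.refUID;                          // step 3: walk up to the parent
  }
  return chain; // step 2: caller applies its per-layer assurance thresholds
}
```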