Verifiability
How Gantral execution decisions can be independently verified without trusting Gantral, the operator, or the UI.
What Verifiability Means (and What It Does Not)
In Gantral, verifiability means that an independent party can reconstruct and evaluate an execution decision using only:
a recorded execution artifact, and
publicly documented execution semantics.
Verifiability does not imply:
legal admissibility in any jurisdiction
regulatory approval or certification
correctness of human judgment
compliance classification or conformity assessment
Those determinations are made by external legal or regulatory authorities, not by Gantral.
Gantral’s role is narrower and structural:
to ensure that execution authority leaves behind evidence that can be independently inspected.
The Unit of Verification
The unit of verification in Gantral is the commitment artifact.
A commitment artifact is emitted at execution time, at the moment authority is enforced.
It binds authorization and execution into a single, immutable record.
A commitment artifact includes:
Execution Binding
execution instance identifier
workflow version id
policy version id
Authority Binding
authority state
human actor identity
timestamp
Integrity Binding
context snapshot hash
previous artifact hash
Once emitted, the artifact is immutable. Artifacts are cryptographically chained such that modification of any artifact invalidates all subsequent artifacts.
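The three bindings above can be sketched as a hash-chained record. This is an illustrative model, not Gantral's actual schema: the field names, the use of SHA-256 over canonical JSON, and the `emit` helper are all assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

def artifact_hash(fields: dict) -> str:
    # Canonical JSON (sorted keys) keeps the digest deterministic across runs.
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

@dataclass(frozen=True)          # frozen approximates post-emission immutability
class CommitmentArtifact:
    execution_id: str            # execution binding
    workflow_version: str
    policy_version: str
    authority_state: str         # authority binding
    actor: str
    timestamp: str
    context_hash: str            # integrity binding
    prev_hash: str               # chains this artifact to its predecessor

    def digest(self) -> str:
        return artifact_hash(asdict(self))

def emit(prev, **fields) -> CommitmentArtifact:
    # A new artifact commits to the digest of the previous one ("" at genesis),
    # so altering any earlier artifact breaks every later link in the chain.
    return CommitmentArtifact(prev_hash=prev.digest() if prev else "", **fields)
```

Because each `prev_hash` commits to the entire predecessor, rewriting any single artifact invalidates the chain from that point forward.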
Verification does not depend on:
application logs
dashboards or approval UIs
mutable databases
post-incident narratives
operator testimony
Logs may explain.
The artifact is what can be verified.
Replay & Verification Model
Verification in Gantral is performed through deterministic replay.
Given:
a commitment artifact
public Gantral execution semantics
an independent party can deterministically reconstruct:
the execution path
the authority state at each transition
whether execution was allowed, paused, escalated, or terminated
Replay reconstructs the authority-state projection of the execution sequence.
Replay does not rehydrate:
agent memory
model reasoning
prompts or internal state
These non-authority details are intentionally excluded from replay scope; only authority and execution state are replayed.
This ensures that verification is:
deterministic
reproducible
independent of runtime environments
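Replay can be pictured as a pure fold over recorded artifacts that reads only authority fields. The states, outcomes, and `ALLOWED_OUTCOMES` rule table below are assumed names for illustration, not Gantral's actual semantics.

```python
# Hypothetical rule table: which execution outcomes each authority
# state permits. Names are illustrative, not Gantral's actual states.
ALLOWED_OUTCOMES = {
    "GRANTED": {"allowed"},
    "PENDING": {"paused", "escalated"},
    "DENIED": {"terminated"},
}

def replay(artifacts: list[dict]) -> list[tuple[str, str]]:
    """Reconstruct the authority-state projection of an execution.

    Reads only authority/execution fields; agent memory, prompts, and
    model reasoning never enter replay. Pure and deterministic: the
    same artifacts always yield the same projection.
    """
    projection = []
    for a in artifacts:
        state, outcome = a["authority_state"], a["outcome"]
        if outcome not in ALLOWED_OUTCOMES.get(state, set()):
            raise ValueError(f"inconsistent transition in {a['execution_id']}")
        projection.append((state, outcome))
    return projection
```

Because the function takes no input other than the artifacts themselves, running it in any environment produces the same projection, which is what makes verification independent of the original runtime.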
Verification Outcomes
Replay and verification may result in one of three outcomes:
VALID — authority and execution state are consistent and untampered
INVALID — tampering, substitution, or inconsistency detected
INCONCLUSIVE — insufficient or incomplete evidence
Gantral is explicit about where proof holds — and where it does not.
What Can Be Verified
Using Gantral artifacts and semantics, an independent verifier can determine:
whether execution was paused or allowed
whether human authority was required
whether a human decision was captured
whether workflow and policy versions remained consistent during replay
when authority was exercised
whether execution state transitions were consistent
These are execution facts, not interpretations.
What Cannot Be Verified
Gantral deliberately does not attempt to verify:
correctness of human judgment
intent or motivation behind decisions
business appropriateness of actions
compliance classification of the system
ethical or normative conclusions
Verifiability is about evidence, not endorsement.
Failure Modes (Explicit)
Verifiability includes clear failure semantics.
Missing artifact → verification is inconclusive
Altered artifact → verification is invalid
Incomplete context → verification is inconclusive
Failing closed is intentional.
Ambiguity is surfaced, not hidden.
Policy Evaluation and Verifiability
Gantral separates policy evaluation from execution authority by design.
Policy engines are used to:
evaluate conditions
apply intent and constraints
return advisory signals
They never:
approve execution
pause execution
resume execution
hold authority
Gantral currently supports policy evaluation using Open Policy Agent (OPA) as a reference implementation.
With OPA:
policies are written declaratively
evaluations are deterministic and side-effect free
policy versions are recorded for replay accuracy
OPA provides signals such as:
allow execution
require human approval
deny execution
Gantral interprets and enforces these signals as execution state transitions.
Policy engines advise.
Gantral enforces.
Policy bundles are versioned and their identifiers are recorded within commitment artifacts to ensure replay consistency.
This separation is critical for verifiability: the policy engine only evaluates and advises, while Gantral alone advances or blocks execution state.
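That boundary can be sketched as follows, with assumed signal and state names (not Gantral's API): the engine's output is plain data, and only the control plane maps it to an execution state transition, failing closed on anything unrecognized.

```python
# Assumed advisory signals and execution states, for illustration only.
SIGNAL_TO_TRANSITION = {
    "allow": "EXECUTING",
    "require_human_approval": "PAUSED_FOR_AUTHORITY",
    "deny": "TERMINATED",
}

def enforce(advisory_signal: str) -> str:
    """Map a policy engine's advisory signal to an execution state.

    The engine never holds authority: its signal is data, and an
    unknown signal fails closed into a pause rather than execution.
    """
    return SIGNAL_TO_TRANSITION.get(advisory_signal, "PAUSED_FOR_AUTHORITY")
```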

Relationship to Standards
Gantral is not a compliance framework and does not replace legal, risk, or governance processes.
It provides execution-time evidence and control primitives that organizations use to meet regulatory and standards-based requirements where human accountability, traceability, and auditability are mandatory.
NIST AI RMF
• enforcing human-in-the-loop as execution state
• immutable execution records
• deterministic replay
• policy-authority separation
EU AI Act
• human oversight at execution time
• replayable records
• authority context binding
ISO/IEC 42001
• operational control layer
• standardized execution records
• independent audit support
Why Verifiability Matters
Verifiability ensures that execution authority can be independently inspected without relying on trust in tools, operators, or runtime environments.
It allows organizations to:
demonstrate execution control without relying on trust in tools or teams
survive long-horizon audits and disputes
decouple accountability from UI, vendors, and runtime environments
Governance becomes a property of execution — not a reconstruction exercise.
The formal specification of these replay and artifact semantics is documented in:
Gantral: Implementation of an Admissible AI Execution Control Plane
