Verifiability

How Gantral execution decisions can be independently verified without trusting Gantral, the operator, or the UI.

What Verifiability Means (and What It Does Not)

In Gantral, verifiability means that an independent party can reconstruct and evaluate an execution decision using only:

  • a recorded execution artifact, and

  • publicly documented execution semantics.

Verifiability does not imply:

  • legal admissibility in any jurisdiction

  • regulatory approval or certification

  • correctness of human judgment

  • compliance classification or conformity assessment

Those determinations are made by external authorities, not by Gantral.

Gantral’s role is narrower and structural:
to ensure that execution authority leaves behind evidence that can be independently inspected.

The Unit of Verification

The unit of verification in Gantral is the commitment artifact.

A commitment artifact is emitted at execution time, at the moment authority is enforced.

It binds authorization and execution into a single, immutable record.

A commitment artifact includes (see the sketch after this list):

  • execution instance identifier

  • execution state transition

  • policy version reference (evaluation logic reference only, not an authorization decision)

  • authority state hash

  • human decision (if applicable)

  • execution context fingerprint

  • timestamp boundary
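
For illustration, an artifact with these fields might be represented as an immutable record whose content digest an independent party can recompute. This is a minimal sketch: the field names, types, and hashing scheme are assumptions, not Gantral's actual schema.

    # Illustrative sketch only: field names and the hashing scheme are assumptions,
    # not Gantral's actual artifact schema.
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from typing import Optional

    @dataclass(frozen=True)  # frozen: the record cannot be mutated after creation
    class CommitmentArtifact:
        execution_instance_id: str     # which execution instance the artifact belongs to
        state_transition: str          # e.g. "PENDING -> ALLOWED"
        policy_version_ref: str        # evaluation logic reference only, not a decision
        authority_state_hash: str      # hash of the authority state at enforcement time
        human_decision: Optional[str]  # captured decision, if human authority was required
        context_fingerprint: str       # fingerprint of the execution context
        timestamp_boundary: str        # when authority was exercised (e.g. ISO 8601)

    def artifact_digest(artifact: CommitmentArtifact) -> str:
        """Content digest an independent verifier can recompute to detect tampering."""
        canonical = json.dumps(asdict(artifact), sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()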

Once emitted, the artifact is immutable.

It does not depend on:

  • application logs

  • dashboards or approval UIs

  • mutable databases

  • post-incident narratives

  • operator testimony

Logs may explain.
The artifact is what can be verified.

Replay & Verification Model

Verification in Gantral is performed through deterministic replay.

Given:

  • a commitment artifact

  • public Gantral execution semantics

an independent party can deterministically reconstruct:

  • the execution path

  • the authority state at each transition

  • whether execution was allowed, paused, escalated, or terminated

Replay does not rehydrate:

  • agent memory

  • model reasoning

  • prompts or internal state

Only authority and execution state are replayed.

This ensures that verification is:

  • deterministic

  • reproducible

  • independent of runtime environments
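
The following sketch illustrates this replay model. It reuses the illustrative CommitmentArtifact record from earlier, and the transition table stands in for the publicly documented execution semantics; both are assumptions of the sketch, not Gantral's actual implementation.

    # Minimal replay sketch. ALLOWED_TRANSITIONS stands in for the publicly
    # documented execution semantics; the real semantics are richer.
    ALLOWED_TRANSITIONS = {
        ("PENDING", "ALLOWED"),
        ("PENDING", "PAUSED"),
        ("PENDING", "TERMINATED"),
        ("PAUSED", "ALLOWED"),
        ("PAUSED", "ESCALATED"),
        ("ESCALATED", "ALLOWED"),
        ("ESCALATED", "TERMINATED"),
    }

    def replay(artifacts):
        """Reconstruct the execution path from an ordered list of commitment
        artifacts for one execution instance.

        Only authority and execution state are replayed: no agent memory,
        prompts, or model reasoning are consulted.
        """
        path = []
        previous_state = None
        for artifact in artifacts:
            src, dst = (s.strip() for s in artifact.state_transition.split("->"))
            if previous_state is not None and src != previous_state:
                raise ValueError(f"broken chain: expected source {previous_state}, got {src}")
            if (src, dst) not in ALLOWED_TRANSITIONS:
                raise ValueError(f"transition {src} -> {dst} is not permitted by the semantics")
            path.append((src, dst))
            previous_state = dst
        return path

Given the same artifacts and the same documented semantics, the reconstructed path is always identical, which is what makes the result reproducible outside the original runtime environment.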

Verification Outcomes

Replay and verification may result in one of three outcomes:

  • VALID — authority and execution state are consistent and untampered

  • INVALID — tampering, substitution, or inconsistency detected

  • INCONCLUSIVE — insufficient or incomplete evidence

Gantral is explicit about where proof holds — and where it does not.

What Can Be Verified

Using Gantral artifacts and semantics, an independent verifier can determine:

  • whether execution was paused or allowed

  • whether human authority was required

  • whether a human decision was captured

  • which policy version was evaluated

  • when authority was exercised

  • whether execution state transitions were consistent

These are execution facts, not interpretations.
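
As a usage sketch, these facts can be read directly off the artifacts produced for an execution instance; field names again follow the illustrative record above rather than Gantral's actual schema.

    # Execution facts are read off the artifacts themselves, not interpreted.
    def execution_facts(artifacts):
        return {
            "was_paused": any("PAUSED" in a.state_transition for a in artifacts),
            "human_decision_captured": any(a.human_decision is not None for a in artifacts),
            "policy_versions_evaluated": sorted({a.policy_version_ref for a in artifacts}),
            "authority_exercised_at": [a.timestamp_boundary for a in artifacts],
        }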

What Cannot Be Verified

Gantral deliberately does not attempt to verify:

  • correctness of human judgment

  • intent or motivation behind decisions

  • business appropriateness of actions

  • compliance classification of the system

  • ethical or normative conclusions

Verifiability is about evidence, not endorsement.

Failure Modes (Explicit)

Verifiability includes clear failure semantics.

  • Missing artifact → verification is inconclusive

  • Altered artifact → verification is invalid

  • Incomplete context → verification is inconclusive

Failing closed is intentional: missing or ambiguous evidence never yields a VALID result.
Ambiguity is surfaced, not hidden.
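
A minimal sketch of these failure semantics follows, reusing the artifact_digest helper from the earlier sketch. The recorded_digest parameter stands for a reference digest anchored somewhere the verifier already trusts; it is an assumption of this example, not part of Gantral's documented interface.

    # Failure semantics sketch: evidence conditions map onto the three outcomes.
    def verify(artifact, recorded_digest):
        if artifact is None or recorded_digest is None:
            return "INCONCLUSIVE"   # missing artifact or missing reference digest
        if artifact_digest(artifact) != recorded_digest:
            return "INVALID"        # altered or substituted artifact
        if not artifact.context_fingerprint or not artifact.authority_state_hash:
            return "INCONCLUSIVE"   # incomplete context
        return "VALID"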

Policy Evaluation and Verifiability

Gantral separates policy evaluation from execution authority by design.

Policy engines are used to:

  • evaluate conditions

  • apply intent and constraints

  • return advisory signals

They never:

  • approve execution

  • pause execution

  • resume execution

  • hold authority

Gantral currently supports policy evaluation using Open Policy Agent (OPA) as a reference implementation.

With OPA:

  • policies are written declaratively

  • evaluations are deterministic and side-effect free

  • policy versions are recorded for replay accuracy

OPA provides signals such as:

  • allow execution

  • require human approval

  • deny execution

Gantral interprets and enforces these signals as execution state transitions.
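
A sketch of that interpretation step is shown below. The signal names and the mapping are illustrative assumptions; in practice the signal would come from an OPA policy evaluation, and only Gantral advances or blocks execution state.

    # Advisory signal -> enforced execution state transition (illustrative mapping).
    SIGNAL_TO_TRANSITION = {
        "allow":            ("PENDING", "ALLOWED"),
        "require_approval": ("PENDING", "PAUSED"),      # pause until human authority is exercised
        "deny":             ("PENDING", "TERMINATED"),
    }

    def enforce(signal, policy_version_ref):
        """Enforce an advisory signal as an execution state transition and record
        the policy version reference for replay accuracy."""
        src, dst = SIGNAL_TO_TRANSITION[signal]
        return {
            "state_transition": f"{src} -> {dst}",
            "policy_version_ref": policy_version_ref,   # evaluation logic reference only
        }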

Policy engines advise.
Gantral enforces.

This separation is critical for verifiability: because policy engines such as OPA only return advisory signals, a verifier can replay from the recorded signal and policy version reference without re-running the engine.
Gantral alone enforces execution by advancing or blocking execution state.

Relationship to Standards

Gantral is not a compliance framework and does not replace legal, risk, or governance processes.

It provides execution-time evidence and control primitives that organizations use to meet regulatory and standards-based requirements where human accountability, traceability, and auditability are mandatory.

NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF emphasizes governance, accountability, traceability, and post-deployment oversight.

Gantral supports these outcomes by:

  • enforcing human-in-the-loop as an execution state

  • producing immutable execution records

  • enabling deterministic replay of authorization decisions

  • separating policy advice from execution authority

Gantral supplies execution-time evidence required by AI RMF governance functions.
It does not define risk tolerances or policy content.

EU Artificial Intelligence Act

The EU AI Act introduces obligations around:

  • human oversight

  • traceability

  • record retention

  • post-market monitoring for high-risk AI systems

Gantral supports high-risk system obligations by:

  • enforcing human oversight at execution time

  • capturing decision context and authority at the moment of action

  • producing replayable records suitable for regulatory inspection

System classification, risk categorization, and conformity assessment remain outside Gantral’s scope.

ISO/IEC 42001

ISO/IEC 42001 focuses on AI management systems, including:

  • defined responsibilities

  • operational controls

  • auditability of AI-assisted processes

Gantral functions as an operational control layer by:

  • making authority explicit and enforceable

  • standardizing execution records across teams

  • enabling independent audit and review

Gantral does not replace management-system processes.
It provides the execution substrate those systems rely on.

Why Verifiability Matters

Verifiability shifts governance from explanation to inspection.

It allows organizations to:

  • demonstrate execution control without relying on trust in tools or teams

  • survive long-horizon audits and disputes

  • decouple accountability from UIs, vendors, and runtime environments

Governance becomes a property of execution — not a reconstruction exercise.