Gantral

An Admissible Execution Control Plane for AI

Infrastructure that binds human authority to execution — producing replayable, third-party-verifiable evidence.

Gantral is open infrastructure for governing how AI execution is allowed to proceed in real workflows.
It does not build agents. It does not optimize models.
It enforces execution-time authority where accountability cannot be assumed.

The Problem Is Not Intelligence

It’s Execution Control, Enforced as Infrastructure

Organizations already use AI across the SDLC, operations, finance, and internal workflows.
What breaks at scale is not capability — it is control.

In practice:

  • AI runs across many tools and teams

  • Approvals are informal and tool-specific

  • Human review is assumed, not enforced

  • Execution records are reconstructed after incidents

Governance depends on discipline rather than infrastructure.
That does not survive scale, audits, or regulatory scrutiny.

What Gantral Is

Gantral is an AI Execution Control Plane.

It operates:

  • above AI agent frameworks

  • below enterprise processes and governance systems

Gantral focuses on execution semantics, not intelligence.

Gantral provides mechanisms to:

  • control execution state (pause, resume, override)

  • model Human-in-the-Loop as an execution state

  • record authority, decisions, and execution context

  • produce deterministic, replayable execution records

  • apply policy decisions independently of agent code

  • bind each execution instance to an owning team and policy context

Execution is governed structurally — not by convention.
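
A minimal sketch of this state model in Go. Every name here (ExecutionState, Execution, Transition, and the individual states) is a hypothetical stand-in rather than Gantral's actual API; the point is that pause, resume, override, and Human-in-the-Loop all reduce to guarded transitions on a single owned state:

    package gantral

    import "fmt"

    // ExecutionState models the lifecycle Gantral controls. All names in
    // this sketch are illustrative, not Gantral's actual identifiers.
    type ExecutionState int

    const (
        StateRunning ExecutionState = iota
        StatePaused                 // resumable only via explicit authority
        StateAwaitingHuman          // Human-in-the-Loop modeled as a state
        StateTerminated
    )

    // Execution binds an instance to an owning team and policy context.
    type Execution struct {
        InstanceID string
        OwnerTeam  string
        PolicyRef  string
        State      ExecutionState
    }

    // Transition permits only legal state changes rather than trusting callers.
    func (e *Execution) Transition(to ExecutionState) error {
        legal := map[ExecutionState][]ExecutionState{
            StateRunning:       {StatePaused, StateAwaitingHuman, StateTerminated},
            StatePaused:        {StateRunning, StateTerminated},
            StateAwaitingHuman: {StateRunning, StateTerminated}, // approve or reject
        }
        for _, next := range legal[e.State] {
            if next == to {
                e.State = to
                return nil
            }
        }
        return fmt.Errorf("illegal transition: %d -> %d", e.State, to)
    }

In this framing, human approval is not a side channel: an execution simply cannot leave StateAwaitingHuman until someone with authority moves it.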

Execution Admissibility

Gantral is designed for environments where execution decisions must be proven after the fact, not merely explained.

An execution is admissible when a third party can independently verify:

  • who had authority

  • what was authorized

  • under which conditions

  • at the exact moment execution occurred

This verification must not rely on:

  • operator testimony

  • mutable logs

  • dashboards

  • access to Gantral infrastructure

Gantral treats admissibility as a first-class execution property, not a reporting feature.

The Commitment Artifact

At execution time, Gantral emits a commitment artifact.

This artifact is the authoritative object that binds authorization and execution into a single, immutable record.

A commitment artifact includes:

  • execution instance identifier

  • execution state transition

  • policy version reference (evaluation logic reference only, not an authorization decision)

  • authority state hash

  • human decision (if applicable)

  • execution context fingerprint

  • timestamp boundary

Once emitted, the artifact is immutable.

It does not depend on:

  • application logs

  • approval UIs

  • workflow dashboards

  • post-incident narratives

This artifact is the unit of proof.
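
As a sketch of how the fields listed above might hang together, here is one possible representation in Go. Field names, types, and the JSON encoding are assumptions for illustration; Gantral defines the authoritative schema:

    package gantral

    import "time"

    // CommitmentArtifact mirrors the fields listed above. Names and types
    // are assumptions for illustration, not Gantral's schema.
    type CommitmentArtifact struct {
        InstanceID    string    `json:"instance_id"`              // execution instance identifier
        Transition    string    `json:"transition"`               // e.g. "paused->running"
        PolicyVersion string    `json:"policy_version"`           // evaluation logic reference only
        AuthorityHash string    `json:"authority_hash"`           // hash of the authority state
        HumanDecision string    `json:"human_decision,omitempty"` // set only when a human decided
        ContextFP     string    `json:"context_fp"`               // execution context fingerprint
        Timestamp     time.Time `json:"timestamp"`                // timestamp boundary
    }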

Independent Verification & Replay

Gantral artifacts are designed to be verified independently.

Given only:

  • a commitment artifact

  • public Gantral execution semantics

an external party can deterministically replay:

  • the execution path

  • the authority state

  • the conditions under which execution was allowed, paused, or terminated

Verification does not require:

  • access to Gantral services

  • access to internal databases

  • operator credentials

Verification outcomes may be:

  • valid (authorization proven)

  • invalid (tampering or substitution detected)

  • inconclusive (insufficient evidence)

Gantral is explicit about where proof holds — and where it does not.
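
A sketch of that three-valued outcome, reusing the CommitmentArtifact sketch above and assuming, purely for illustration, a SHA-256 context fingerprint:

    package gantral

    import (
        "crypto/sha256"
        "encoding/hex"
    )

    // Outcome is the three-valued verification result described above.
    type Outcome int

    const (
        Valid        Outcome = iota // authorization proven
        Invalid                     // tampering or substitution detected
        Inconclusive                // insufficient evidence
    )

    // Verify recomputes the context fingerprint from independently
    // supplied evidence and compares it to the artifact. It touches no
    // Gantral service, database, or credential.
    func Verify(a CommitmentArtifact, evidence []byte) Outcome {
        if len(evidence) == 0 {
            return Inconclusive // absence of evidence proves nothing
        }
        sum := sha256.Sum256(evidence)
        if hex.EncodeToString(sum[:]) != a.ContextFP {
            return Invalid
        }
        return Valid
    }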

Detailed verification semantics and failure modes are documented in the Verifiability section.

Policy Evaluation (Advisory, Not Authoritative)

Gantral separates policy evaluation from execution authority by design.

Policy engines are used to evaluate conditions and return advisory signals — they never approve, pause, or resume execution themselves.

Gantral currently supports policy evaluation using Open Policy Agent (OPA) as a reference implementation.

With OPA:

  • policies are written declaratively

  • evaluations are deterministic and side-effect free

  • policy versions are recorded for replay accuracy

OPA provides signals such as:

  • allow execution

  • require human approval

  • deny execution

Gantral interprets and enforces these signals as execution state transitions.

Policy engines advise.
Gantral enforces.

This separation ensures:

  • no self-approval by agents

  • no policy logic embedded in workflows

  • no ambiguity during audit or replay
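
One way to picture that boundary, reusing the Execution and state names from the earlier sketch. PolicySignal, PolicyEngine, and Enforce are illustrative names rather than Gantral's API; what matters is that the engine only returns a value, and only the enforcement side mutates state:

    package gantral

    import (
        "context"
        "fmt"
    )

    // PolicySignal is the advisory output of a policy engine such as OPA.
    // The signal names are assumptions for illustration.
    type PolicySignal string

    const (
        SignalAllow        PolicySignal = "allow"
        SignalRequireHuman PolicySignal = "require_human"
        SignalDeny         PolicySignal = "deny"
    )

    // PolicyEngine abstracts the evaluator (for example, an OPA query).
    // Evaluation is deterministic and side-effect free; the engine never
    // touches execution state.
    type PolicyEngine interface {
        Evaluate(ctx context.Context, input map[string]any) (PolicySignal, error)
    }

    // Enforce is where authority lives: it maps an advisory signal onto a
    // guarded transition on the Execution sketched earlier.
    func Enforce(ctx context.Context, eng PolicyEngine, e *Execution, input map[string]any) error {
        sig, err := eng.Evaluate(ctx, input)
        if err != nil {
            return err
        }
        switch sig {
        case SignalAllow:
            if e.State == StateRunning {
                return nil // already running; allow means continue
            }
            return e.Transition(StateRunning)
        case SignalRequireHuman:
            return e.Transition(StateAwaitingHuman)
        case SignalDeny:
            return e.Transition(StateTerminated)
        default:
            return fmt.Errorf("unknown policy signal %q", sig)
        }
    }

The point of the split is that neither an agent nor the policy engine can advance execution state on its own; only the enforcement path can.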

Authority Is Not Intelligence

Gantral deliberately separates responsibilities.

Gantral:

  • owns execution state

  • enforces authority transitions

  • captures human decisions

  • guarantees deterministic replay

Agent frameworks:

  • own reasoning, planning, and memory

  • execute tools and actions

  • remain interchangeable

Policy engines (e.g., OPA):

  • evaluate conditions

  • return advisory signals

  • never hold execution authority

Authority decisions are enforced as execution state transitions, not recommendations.
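
A sketch of why agent frameworks remain interchangeable, again reusing the earlier Execution sketch; AgentFramework and Run are hypothetical names, not a published Gantral contract:

    package gantral

    import "context"

    // AgentFramework is the assumed boundary to any agent runtime. The
    // interface is illustrative, not a published Gantral contract.
    type AgentFramework interface {
        Step(ctx context.Context) error // one unit of reasoning or tool use
    }

    // Run drives an interchangeable agent, re-checking execution state
    // before every step; a pause or termination from the earlier sketch
    // stops the loop without the agent's cooperation.
    func Run(ctx context.Context, e *Execution, agent AgentFramework) error {
        for e.State == StateRunning {
            if err := agent.Step(ctx); err != nil {
                return err
            }
        }
        return nil
    }

Swapping the agent framework changes nothing in this loop, which is what keeps frameworks interchangeable.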

Regulatory & Standards Alignment

Gantral is not a compliance framework and does not replace legal, risk, or governance processes.

It provides execution-time evidence and control primitives that organizations use to meet regulatory and standards-based requirements where human accountability, traceability, and auditability are mandatory.

NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF emphasizes governance, accountability, traceability, and post-deployment oversight.

Gantral supports these outcomes by:

  • enforcing human-in-the-loop as an execution state

  • producing immutable execution records

  • enabling deterministic replay of authorization decisions

  • separating policy advice from execution authority

Gantral supplies execution-time evidence required by AI RMF governance functions.
It does not define risk tolerances or policy content.

EU Artificial Intelligence Act

The EU AI Act introduces obligations around:

  • human oversight

  • traceability

  • record retention

  • post-market monitoring for high-risk AI systems

Gantral supports high-risk system obligations by:

  • enforcing human oversight at execution time

  • capturing decision context and authority at the moment of action

  • producing replayable records suitable for regulatory inspection

System classification, risk categorization, and conformity assessment remain outside Gantral’s scope.

ISO/IEC 42001

ISO/IEC 42001 focuses on AI management systems, including:

  • defined responsibilities

  • operational controls

  • auditability of AI-assisted processes

Gantral functions as an operational control layer by:

  • making authority explicit and enforceable

  • standardizing execution records across teams

  • enabling independent audit and review

Gantral does not replace management-system processes.
It provides the execution substrate those systems rely on.

Who Gantral Is For

Platform & Infrastructure Teams
How do we enforce approvals without embedding policy logic into every agent?

Security, Risk & Compliance
Can we prove authority was active at execution time — years later?

Auditors & Regulators
Is authorization inseparable from execution, or reconstructed after the fact?

OSS Contributors
Is the execution core inspectable, deterministic, and governance-neutral?
Can the execution semantics be audited independently of the UI?

Gantral is designed for teams that operate where trust must be earned through proof.

Open Core, Deliberately

Gantral’s execution core is open source under the Apache 2.0 license.

This allows:

  • inspection of execution semantics

  • independent security and compliance review

  • long-term trust in regulated environments

Trust-critical execution logic remains open.
Managed experience and enterprise tooling may be commercial.

Engage as a Design Partner

Gantral adoption is deliberate by design.

We work with teams operating in regulated or high-stakes environments to validate execution admissibility under real constraints — not hypothetical demos.

  • Review Execution Semantics

  • Verifiability & Independent Replay