The AI Execution Control Plane
A position paper on execution-time governance for AI-assisted systems
AI governance today focuses on models, policies, and reviews.
In practice, failures happen at execution time: the moment an action is allowed to run, paused, approved, overridden, or audited.
This paper defines the AI Execution Control Plane as a missing infrastructure layer that formalizes execution authority, human accountability, and replayable audit across AI-assisted workflows.
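To make these terms concrete, the sketch below shows one way an execution-time decision and its replayable audit record might be represented. It is illustrative only; the names (ControlPlane, Decision, Outcome) and the hash-chained log are assumptions of this sketch, not definitions drawn from the papers.

```python
# Illustrative only: a toy execution-time control decision with a
# hash-chained, replayable audit log. Names and fields are hypothetical.
from dataclasses import dataclass, field, asdict
from enum import Enum
import hashlib
import json
import time


class Outcome(Enum):
    ALLOW = "allow"   # the action may run immediately
    PAUSE = "pause"   # the action is held pending human approval
    DENY = "deny"     # the action is refused


@dataclass
class Decision:
    action: str       # what the AI-assisted system proposed to do
    outcome: Outcome  # the execution-time verdict
    authority: str    # the principal accountable for the verdict
    reason: str       # rationale recorded for later audit
    timestamp: float = field(default_factory=time.time)


class ControlPlane:
    """Evaluates proposed actions against a policy and appends every
    decision to an append-only, hash-chained audit log."""

    def __init__(self, policy):
        self.policy = policy        # callable: action -> (Outcome, reason)
        self.audit_log = []         # replayable decision records
        self._prev_hash = "genesis"

    def decide(self, action: str, authority: str) -> Decision:
        outcome, reason = self.policy(action)
        decision = Decision(action, outcome, authority, reason)
        self._append(decision)
        return decision

    def _append(self, decision: Decision) -> None:
        record = asdict(decision)
        record["outcome"] = decision.outcome.value
        record["prev_hash"] = self._prev_hash
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._prev_hash
        self.audit_log.append(record)


# Example policy: destructive actions are paused for human approval.
def example_policy(action: str):
    if "delete" in action:
        return Outcome.PAUSE, "destructive action requires human approval"
    return Outcome.ALLOW, "within delegated autonomous authority"


plane = ControlPlane(example_policy)
plane.decide("summarize incident report", authority="agent:assistant")
plane.decide("delete customer records", authority="agent:assistant")
```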
Executive Summary
A concise overview of the problem, principles, and reference architecture.
Designed for leaders, architects, and decision-makers.
Full Whitepaper
A detailed, vendor-neutral position paper defining execution authority, determinism, and auditability in AI systems.
Admissible Execution: Invariants for AI Execution Authority
A normative position paper defining the minimum admissibility bar for execution-time authority in AI-assisted systems. This paper specifies the non-negotiable invariants required for execution decisions to remain defensible under audit, incident review, and adversarial scrutiny. It focuses on authority, non-repudiation, replayability, and failure modes—independent of implementation or tooling. Designed for auditors, risk leaders, architects, and standards-aligned governance teams.
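As one way of picturing how such invariants could be checked after the fact, the sketch below replays a hypothetical audit log and verifies authority, non-repudiation, and chain integrity. The log format, field names, and keying scheme are assumptions of the sketch, not the paper's specification.

```python
# Illustrative only: replaying an audit log and checking the invariants
# named above. The log format, field names, and keying scheme are
# assumptions made for this sketch, not the paper's specification.
import hashlib
import hmac
import json


def record_digest(record: dict, prev_hash: str) -> str:
    """Deterministic digest over a record body and its predecessor's hash."""
    body = {k: v for k, v in record.items() if k not in ("hash", "mac")}
    body["prev_hash"] = prev_hash
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


def verify_log(audit_log: list, authority_keys: dict) -> bool:
    """Fail closed: return False the moment any invariant cannot be shown."""
    prev_hash = "genesis"
    for record in audit_log:
        # Authority: every decision must name an accountable, known principal.
        key = authority_keys.get(record.get("authority", ""))
        if key is None:
            return False
        # Non-repudiation: the record carries a MAC under that principal's key.
        expected_mac = hmac.new(key, record["hash"].encode(), "sha256").hexdigest()
        if not hmac.compare_digest(expected_mac, record.get("mac", "")):
            return False
        # Replayability: recomputing the chain must reproduce the stored hash.
        if record_digest(record, prev_hash) != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```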