Platform

One control plane for every agent action.

Aegis is a four-layer platform that supervises an agent’s runtime, the web it reads, the memory it writes, and the adversarial baseline it must clear. Each layer ships independently; together, they emit one evidence stream.

Architecture

Out-of-process, by design. The agent can’t talk Aegis out of doing its job.

Aegis runs as a privileged supervisor next to the agent process. Every system call, network request, memory access, and tool spawn traverses the Aegis hooks before it’s admitted. The agent itself never sees the policy engine.

  1. Observe

    Agent traffic — tool calls, web reads, memory writes, model prompts, human messages — is mirrored to a tamper-evident log.

    ARTIFACT  log/agent.aegis-evt

  2. Attest

    Each event is annotated with provenance: who authored the tool, how the URL is classified, whether memory is fresh, whether the model is signed.

    ARTIFACT  evt.attestation

  3. Verify

    Aegis classifiers and the policy engine evaluate the event against your policy and the public threat catalog in <2ms, before the agent acts on it.

    ARTIFACT  policy.decision

  4. Contain

    On signal: snapshot agent state, quarantine credentials, rewind memory to the last clean checkpoint, surface a deterministic incident report.

    ARTIFACT  incident.bundle
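The four stages above can be sketched as a single supervision loop. Everything here is illustrative, not the Aegis API: the `Event` shape, the placeholder provenance tag, and the toy `block_untrusted` policy are all assumptions.

```python
# Minimal sketch of the observe -> attest -> verify -> contain loop.
# All names (Event, observe, attest, verify, supervise) are hypothetical.
import json
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                      # e.g. "tool.call", "web.read"
    payload: dict
    attestation: dict = field(default_factory=dict)

def observe(event: Event, log: list) -> None:
    """Stage 01: mirror the event to an append-only log."""
    log.append(json.dumps({"kind": event.kind, "payload": event.payload},
                          sort_keys=True))

def attest(event: Event) -> None:
    """Stage 02: annotate provenance (a placeholder tag in this sketch)."""
    event.attestation["source"] = "untrusted" if event.kind == "web.read" else "local"

def verify(event: Event, policy: dict) -> str:
    """Stage 03: evaluate against policy before the agent acts."""
    if event.attestation.get("source") == "untrusted" and policy.get("block_untrusted"):
        return "deny"
    return "allow"

def supervise(event: Event, policy: dict, log: list) -> str:
    observe(event, log)
    attest(event)
    decision = verify(event, policy)
    if decision == "deny":
        pass  # Stage 04: containment would snapshot state and quarantine here.
    return decision

log: list = []
print(supervise(Event("web.read", {"url": "https://example.com"}),
                {"block_untrusted": True}, log))  # -> deny
```

The point of the shape, not the toy logic: the loop sits outside the agent, so the agent never calls these functions and cannot skip them.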

Evidence-first

Every decision Aegis makes is replayable.

If a finding can’t be reproduced by an auditor, it isn’t a finding. Aegis emits structured evidence bundles — input hashes, attested provenance, policy version, classifier scores — for every supervised event.

  • Deterministic decisions: same inputs, same verdict, every replay.
  • Append-only Merkle log per agent — tampering is detectable.
  • SBOM-style export for compliance: SOC2, ISO 42001, EU AI Act.
  • Drag-and-drop bundle export into your existing SIEM or data lake.
{
  "evt": "tool.call",
  "agent": "claude-code/4.7.0",
  "ts": "2026-05-04T03:11:42.184Z",
  "tool": {
    "name": "browser.navigate",
    "args_hash": "sha256:8f1...c0a",
    "attestation": "attest:domain:medium-risk"
  },
  "policy": {
    "version": "v0.1.7",
    "decision": "allow_with_redaction",
    "rules_fired": ["web.untrusted", "memory.no_persist"]
  },
  "classifiers": {
    "indirect_injection": 0.07,
    "exfiltration_intent": 0.02,
    "social_engineering": 0.04
  },
  "evidence_bundle": "evt://01HKZ5...4KQ.aegis"
}
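The "append-only Merkle log per agent" claim reduces to a hash chain: each entry's digest covers the previous digest, so rewriting any past entry changes the head. A minimal sketch of that property, assuming a simplified chain rather than the actual Aegis log format:

```python
# Why tampering is detectable: each hash covers the previous one,
# so editing any entry changes every subsequent head.
# Illustration only; not the Aegis log format.
import hashlib

def chain(entries):
    """Return the running hash chain over a list of event strings."""
    h = "0" * 64                       # genesis value
    heads = []
    for e in entries:
        h = hashlib.sha256((h + e).encode()).hexdigest()
        heads.append(h)
    return heads

events = ['{"evt":"tool.call"}', '{"evt":"memory.write"}']
head = chain(events)[-1]

# An auditor replays the chain; a rewritten entry yields a different head.
tampered = ['{"evt":"tool.call","hidden":true}', '{"evt":"memory.write"}']
print(chain(tampered)[-1] == head)  # -> False
```

A production log would use a Merkle tree for efficient inclusion proofs; the chain above shows only the tamper-evidence property.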

Integrations

We meet your stack at the boundary.

Aegis hooks attach at the OS process and network boundary — no framework rewrite, no privileged code in your agent.

Agent runtimes

  • OpenAI Agents SDK
  • Anthropic Claude Code
  • Codex CLI
  • LangChain / LangGraph
  • CrewAI
  • AutoGen
  • LlamaIndex
  • Inkeep

Tool surfaces

  • Model Context Protocol
  • OpenAPI / REST
  • GraphQL
  • Browser-use / Computer-use
  • Shell tool calls
  • Custom function tools

Memory backends

  • Postgres + pgvector
  • LanceDB
  • Mem0
  • Weaviate
  • Pinecone
  • Filesystem JSONL
  • SQLite
  • Redis vector

Identity & policy

  • OIDC / SSO
  • Open Policy Agent
  • Cedar
  • AWS IAM
  • GCP Workload Identity
  • Vault

Common questions

Is Aegis another guardrails wrapper around the model?

No. Guardrails sit inside the agent loop, where the agent itself can argue its way around them. Aegis sits outside, between the agent process and the world. Every read and write passes through it. The agent doesn’t get to choose whether to comply.

What latency does this add per agent step?

Aegis verification runs out-of-process and in parallel with model inference, with cached attestations. Expect <2ms p95 for the verify path on a warm cache. We instrument every release; see SecureBench for current numbers.

What does ‘verifiable’ mean here, exactly?

Every Aegis decision emits a structured artifact — input hash, attested provenance, policy version, classifier scores. Audits replay the decision against the same artifacts and must reach the same conclusion. No screenshot of a dashboard counts.
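What "replay" implies mechanically: the decision is a pure function of the bundle's recorded inputs, so an auditor can recompute it and compare. The field names below mirror the sample event earlier on this page; the `decide` logic itself is invented for illustration.

```python
# Sketch of deterministic replay: same recorded inputs, same verdict.
# The thresholds and rules here are hypothetical, not the Aegis policy engine.
def decide(policy_version: str, attestation: str, scores: dict) -> str:
    """Pure function of the bundle's fields; policy_version would select
    the ruleset in a real engine (unused in this sketch)."""
    if max(scores.values()) > 0.5:
        return "deny"
    if attestation.endswith("medium-risk"):
        return "allow_with_redaction"
    return "allow"

bundle = {
    "policy_version": "v0.1.7",
    "attestation": "attest:domain:medium-risk",
    "scores": {"indirect_injection": 0.07, "exfiltration_intent": 0.02},
    "decision": "allow_with_redaction",
}

# The auditor's replay must reach the recorded conclusion.
replayed = decide(bundle["policy_version"], bundle["attestation"], bundle["scores"])
print(replayed == bundle["decision"])  # -> True
```

If the replay and the recorded decision ever diverge, either the bundle was altered or the policy version doesn't match, and both are findings in their own right.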

Will Aegis work with frameworks I’ve already shipped?

That’s the design constraint. Aegis attaches at the OS process and network boundary — your existing agent code doesn’t need to change. We’re prioritizing OpenAI Agents SDK, Anthropic Claude Code, LangGraph, and MCP for the first preview.

Next move

Pick the layer that hurts most. Compose the rest.

We’re onboarding twelve early-access partners in 2026. One threat surface, verifiable evidence, in a week.