The audit layer for AI agents.
Every action your agents take — captured, hash-chained, and mapped to the frameworks your auditors actually ask about. EU AI Act. DORA. ISO 42001. SOC 2.
Agents are running in production. Auditors are already asking.
The EU AI Act, DORA, and Colorado's AI Act all require logs that can reconstruct any high-risk AI decision. Langfuse traces won't pass an audit.
Every MCP tool call, every LLM invocation, every autonomous action — and nobody knows what they did last Tuesday.
"Check CloudWatch" is not a compliance control. "We'll export from Langfuse" is not evidence.
Intercept. Record. Prove.
Drop in our SDK or point your MCP gateway at Kaldros. Every tool call, every model response flows through us.
Each event is canonicalized, hashed, and linked to the previous event's hash. Timestamps are signed every fifteen minutes. The log is append-only and tamper-evident.
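The chaining described above can be sketched in a few lines. This is an illustrative model only, not the Kaldros wire format: the field names, the SHA-256 choice, and the all-zeros genesis hash are assumptions for the sketch.

```python
# Illustrative hash-chain sketch; field names and canonicalization
# choices are hypothetical, not the actual Kaldros record format.
import hashlib
import json

GENESIS = "0" * 64  # assumed starting value for the chain

def canonicalize(event: dict) -> bytes:
    # Deterministic serialization: sorted keys, fixed separators,
    # so the same event always hashes to the same digest.
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

def chain(events: list[dict]) -> list[dict]:
    """Link each event to the previous event's hash."""
    prev_hash, records = GENESIS, []
    for event in events:
        digest = hashlib.sha256(canonicalize(event) + prev_hash.encode()).hexdigest()
        records.append({"event": event, "prev_hash": prev_hash, "hash": digest})
        prev_hash = digest
    return records

def verify(records: list[dict]) -> bool:
    """Recompute every link offline; any tampering breaks the chain."""
    prev_hash = GENESIS
    for r in records:
        expected = hashlib.sha256(canonicalize(r["event"]) + prev_hash.encode()).hexdigest()
        if r["prev_hash"] != prev_hash or r["hash"] != expected:
            return False
        prev_hash = r["hash"]
    return True
```

Because each record commits to its predecessor's hash, editing or deleting any earlier event invalidates every record after it, which is what makes the log tamper-evident rather than merely append-only.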
Export evidence packs for any framework in one click. Signed PDFs, machine-readable JSON, and a verifiable chain your auditor can re-check offline.
Mapped to the frameworks you're already being asked about.
Built for the way agents actually run.
Point any agent runtime at Kaldros — MCP gateway, language SDK, or raw HTTP — and start writing to the chain. No framework lock-in, no vendor tax.
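The raw-HTTP path above is the lowest common denominator. A minimal sketch, assuming a hypothetical `POST /v1/events` endpoint and bearer-token auth — the real endpoint URL, auth scheme, and payload shape belong to the Kaldros docs, not this sketch:

```python
# Hypothetical raw-HTTP ingestion sketch. The endpoint URL, header
# names, and payload schema are assumptions for illustration.
import json
import urllib.request

KALDROS_URL = "https://ingest.kaldros.example/v1/events"  # placeholder URL

def build_event_request(api_key: str, event: dict) -> urllib.request.Request:
    """Package one agent action as a POST request to the audit log."""
    body = json.dumps({"event": event}).encode()
    return urllib.request.Request(
        KALDROS_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is a single call:
#   urllib.request.urlopen(build_event_request(api_key, event))
```

Anything that can issue an HTTP POST — a sidecar, a gateway plugin, a cron job replaying buffered events — can write to the chain, which is the point: no agent framework is privileged.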
Your data stays yours.
Pick a region per workspace. Data never leaves it. Cross-region is an explicit, audited action — not a quiet default.
AWS KMS, GCP KMS, Azure Key Vault. We never hold the keys. Revoke us with one API call.
SOC 2 Type 2 and ISO 27001 in progress. Memoranda and sub-processor list available on request under NDA.