Private Beta · 10 spots · Open source · MIT license

The credential layer
for AI agents

Your agents run with zero API keys. Identark's gateway fetches credentials per-request from an encrypted vault — so a compromised agent can't leak your secrets.

Built in public
Published on PyPI · MIT licensed SDK · Full type coverage · CI passing · UK/EU data residency

Zero secrets in three steps

Switch from direct API calls to credential-isolated execution in under 5 minutes.

01

Register your credentials

Store your LLM API keys once in Identark's encrypted vault via CLI or dashboard. They never leave the control plane again.

02

Swap two lines of code

Replace your OpenAI or Anthropic client with ControlPlaneGateway(). Your agent logic stays identical.
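The swap might look like this. A sketch only: the SDK is in private beta, so the ControlPlaneGateway constructor arguments and client surface shown here are assumptions, not confirmed signatures.

```python
# BEFORE: the agent process holds the real API key
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# AFTER: the agent holds only a session token; the gateway fetches
# credentials per request from the vault (constructor args illustrative)
from identark import ControlPlaneGateway
client = ControlPlaneGateway()

# Agent logic stays identical either way
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
```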

03

Deploy with confidence

Per-request credential fetch, automatic discard, immutable audit log. Your agent holds zero secrets — even if compromised.

Your Agent → Identark Gateway → Encrypted Vault → LLM Provider

agent carries: session_id only  ·  no API keys  ·  no secrets

Built for production agents

Everything you need to run secure, auditable AI workloads — from dev to enterprise.

Core

Zero-secret execution

Every LLM call routes through the gateway. Credentials are fetched from Vault per-request and discarded immediately. Your agent process never holds a secret — not even in memory between calls.
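The fetch-and-discard loop can be sketched with a stub vault. Nothing here is the real SDK; it only illustrates the idea that the credential exists inside a single call frame and nowhere else:

```python
# Stub vault standing in for the encrypted control plane
SECRET_VAULT = {"openai": "sk-demo-not-a-real-key"}

def gateway_call(provider: str, prompt: str) -> str:
    """Fetch the credential per request, use it, discard it."""
    api_key = SECRET_VAULT[provider]   # fetched for this request only
    try:
        # ... forward the request to the provider using api_key ...
        return f"response to {prompt!r}"
    finally:
        del api_key                    # discarded before the call returns

print(gateway_call("openai", "hi"))
```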

Immutable audit log

Every call logged: agent ID, tool, timestamp, cost, latency. Append-only. Your compliance team will finally sign off.

Cost caps per agent

Set a hard limit per session or per month. When the cap is reached, the gateway raises a typed exception. No runaway bills.
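A stub showing the cap behaviour described above. The class and exception names are illustrative, not the SDK's real identifiers:

```python
class CostCapExceeded(Exception):
    """Raised when a call would push session spend past its hard cap."""

class StubGateway:
    """Stand-in for the gateway's cost-cap logic."""
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        # Refuse the call before it spends, not after
        if self.spent_usd + cost_usd > self.cap_usd:
            raise CostCapExceeded(f"hard cap of ${self.cap_usd:.2f} reached")
        self.spent_usd += cost_usd

gw = StubGateway(cap_usd=1.00)
gw.charge(0.60)          # fine: $0.60 of $1.00 used
try:
    gw.charge(0.50)      # would total $1.10, so the call is refused
except CostCapExceeded as exc:
    print(exc)           # typed exception, handled in agent code
```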

Streaming via SSE

Real-time token streaming with no latency penalty. Same secure gateway, same zero-credential model.

UK/EU data residency

Azure UK South, AWS London, Mistral EU. Your data stays in the region your contracts require.

MockGateway for testing

Built-in test gateway lets you unit test every agent behaviour — no real credentials, no network, no cost.
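A unit test in this style might look like the following. The stub below mimics the MockGateway idea (a canned reply plus recorded calls, with no network or credentials); the real class's interface may differ:

```python
class MockGateway:
    """Stub test gateway: returns a canned reply and records every call."""
    def __init__(self, canned_reply: str):
        self.canned_reply = canned_reply
        self.calls: list[str] = []

    def complete(self, prompt: str) -> str:
        self.calls.append(prompt)   # captured for assertions
        return self.canned_reply    # no network, no credentials, no cost

def summarise(gateway, text: str) -> str:
    """Example agent behaviour under test."""
    return gateway.complete(f"Summarise: {text}")

gw = MockGateway(canned_reply="A short summary.")
assert summarise(gw, "long document") == "A short summary."
assert gw.calls == ["Summarise: long document"]
```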

How Identark differs

LiteLLM and Portkey solve model routing. Identark solves credential isolation — a different layer, a different problem.

Capability                    | Identark             | LiteLLM              | Portkey              | Direct OpenAI SDK
Agent holds zero API keys     | ✓ Yes                | ✗ Proxy key required | ✗ Proxy key required | ✗ Full key in env
Per-request credential fetch  | ✓ Yes                | ✗ No                 | ✗ No                 | ✗ No
Immutable audit log           | ✓ Yes                | ~ Request logs only  | ✓ Yes                | ✗ No
Multi-LLM provider routing    | ~ OpenAI + Anthropic | ✓ 100+ models        | ✓ 1600+ models       | ✗ One provider
Open source SDK               | ✓ MIT                | ✓ MIT                | ~ Partial            | ✓ Yes
Cost caps per agent           | ✓ Yes                | ~ Budget limits      | ✓ Yes                | ✗ No
Works with LangChain / CrewAI | ✓ Yes                | ✓ Yes                | ✓ Yes                | ~ Manual

Works with your stack

Drop-in adapters for the frameworks your team already uses.

Simple pricing

No markup on LLM costs. You pay your providers directly. We charge for the gateway.

Free
$0
For open-source projects and evaluation. No credit card required.
  • 500 executions / month
  • 1 organisation
  • OpenAI + Anthropic
  • Community support
  • Full audit log access
Request Access
Enterprise
Custom
For regulated industries with compliance, SSO, and on-prem requirements.
  • Unlimited executions
  • Unlimited orgs
  • SSO / SAML / SCIM
  • SLA guarantee
  • On-prem deployment
  • Dedicated support
Contact sales

Common questions

Why not just use environment variables?

Env vars are accessible to your agent's entire process. A prompt injection attack, a confused agent, or a malicious tool call can read and exfiltrate them. Identark ensures your agent only holds a session token — the actual credentials never enter the agent process.
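The risk is easy to demonstrate: any code that shares the agent's process can read its environment in one line.

```python
import os

# Simulate the usual setup: a real key sitting in the environment
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"

def malicious_tool() -> str:
    # Any tool, plugin, or injected instruction running inside the
    # agent process can do this:
    return os.environ.get("OPENAI_API_KEY", "")

leaked = malicious_tool()
assert leaked == "sk-demo-not-a-real-key"  # full key, ready to exfiltrate
```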

Does this add latency to my LLM calls?

Our benchmark shows under 1.5 ms of SDK overhead per call — well under 1% of a typical LLM response time (300–800 ms). The vault fetch happens in parallel with request setup and is not on the critical path.

Can I use Identark alongside Portkey or LiteLLM?

Yes. They solve different problems. Portkey and LiteLLM handle model routing and observability. Identark handles credential isolation at the agent execution layer. They complement each other — your Portkey key can itself be stored in Identark's vault.

What happens if the gateway goes down?

Your agent's LLM calls fail with a typed NetworkError that you handle in your retry logic. We run on Fly.io with health checks and auto-restart. Enterprise customers get SLAs and multi-region failover.
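Retry handling might look like this sketch. NetworkError is the typed error named in the answer above; the retry helper and its parameters are illustrative, not part of the SDK:

```python
import time

class NetworkError(Exception):
    """Stand-in for the SDK's typed network error."""

def call_with_retry(fn, retries: int = 3, backoff_s: float = 0.0):
    """Retry fn on NetworkError with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except NetworkError:
            if attempt == retries - 1:
                raise  # out of retries: surface the typed error
            time.sleep(backoff_s * (2 ** attempt))

# A call that fails twice, then succeeds (a brief gateway blip)
attempts = {"n": 0}
def flaky_llm_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise NetworkError("gateway unreachable")
    return "ok"

print(call_with_retry(flaky_llm_call))  # succeeds on the third attempt
```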

Do you store my prompts or responses?

No. Audit logs record metadata only: timestamps, token counts, costs, agent IDs. Prompt and response content is never stored. Enterprise customers can configure custom retention and deletion policies.

How does the open-source SDK relate to Identark Cloud?

The SDK is MIT licensed and free forever. DirectGateway works locally with no account. ControlPlaneGateway connects to our hosted cloud — or you can self-host the control plane for enterprise.

Zero secrets.
Full audit trail.

We're onboarding 10 developers for our private beta.

$ pip install identark-sdk