KeyZero

Use Case

Secret Management for MCP Servers

Prevent credential leakage in MCP tool responses. Wrap MCP servers with KeyZero or use KeyZero as an MCP server for secure secret resolution.

The Model Context Protocol (MCP) connects AI agents to external tools through a structured interface. An MCP server exposes tools -- database queries, API calls, file operations -- that agents invoke by name. Almost every MCP server that touches an external service needs credentials. How those credentials are managed determines whether your MCP setup is secure or becomes a pipeline for leaking secrets.

How MCP Servers Get Credentials Today

The typical MCP server configuration in .mcp.json looks like this:

{
  "mcpServers": {
    "github": {
      "command": "node",
      "args": ["./mcp-servers/github-server.js"],
      "env": {
        "GITHUB_TOKEN": "ghp_xxxxxxxxxxxxxxxxxxxx"
      }
    },
    "database": {
      "command": "python",
      "args": ["./mcp-servers/db-server.py"],
      "env": {
        "DATABASE_URL": "postgres://admin:s3cret@prod-db:5432/app"
      }
    }
  }
}

Credentials appear in plaintext in a JSON configuration file that often lives in the project root. This file is committed to Git, shared across team members, and readable by any process with access to the filesystem.

Alternative approaches -- environment variables from .env files, shell profile exports, or command-line arguments visible in ps aux -- have similar exposure problems.
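The exposure is structural, not accidental: anything running inside the server process can read its environment. A minimal Python sketch (the token value here is a made-up placeholder):

```python
import os

# Simulates the "env" block in .mcp.json: the credential lands in the
# process environment before the server's own code even runs.
os.environ["GITHUB_TOKEN"] = "ghp_example_value"

def some_transitive_dependency():
    # Nothing distinguishes trusted code from a third-party package here:
    # any imported module can read (and log, or exfiltrate) the token.
    return os.environ.get("GITHUB_TOKEN")

print(some_transitive_dependency())  # prints the plaintext token
```

The same applies to `.env` files once loaded, and to child processes, which inherit the parent's environment by default.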

The Deeper Problem: Tool Response Leakage

Even if you manage credentials correctly at the server level, MCP introduces a unique leakage channel: tool responses flow back into the AI agent's context window.

Consider a database MCP server that executes a query. The query fails, and the error message includes the connection string:

Error: connection to postgres://admin:s3cret@prod-db:5432/app
failed: password authentication failed

This error is returned as the tool response. The AI agent receives it, includes it in the conversation context, and the database password is now:

  • Stored in the conversation history
  • Sent to the LLM provider on every subsequent turn
  • Potentially included in the agent's generated output

This is not a hypothetical scenario. Database drivers, HTTP clients, and API libraries routinely include credentials in error messages.
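The obvious mitigation -- scrubbing credentials out of error text before it becomes a tool response -- is brittle, because it only catches leak shapes you anticipated. A naive sketch of what an MCP server author might attempt (this is illustrative, not a KeyZero feature):

```python
import re

# Strips userinfo (user:password@) out of URL-shaped strings.
# Catches the connection-string case above, and nothing else: tokens in
# headers, JSON bodies, or stack traces pass straight through.
CRED_RE = re.compile(r"://[^/\s:]+:[^/\s@]+@")

def scrub(error_message: str) -> str:
    return CRED_RE.sub("://[REDACTED]@", error_message)

msg = ("Error: connection to postgres://admin:s3cret@prod-db:5432/app "
       "failed: password authentication failed")
print(scrub(msg))
# Error: connection to postgres://[REDACTED]@prod-db:5432/app
# failed: password authentication failed
```

Pattern-based scrubbing requires the server to know every way a credential might appear, which is exactly the assumption that fails in practice.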

KeyZero Integration Patterns

Pattern 1: Wrapping MCP Servers with kz run

Replace plaintext credentials in .mcp.json with KeyZero-wrapped commands:

{
  "mcpServers": {
    "github": {
      "command": "kz",
      "args": ["run", "--blind", "--", "node", "./mcp-servers/github-server.js"]
    },
    "database": {
      "command": "kz",
      "args": ["run", "--blind", "--", "python", "./mcp-servers/db-server.py"]
    }
  }
}

With the corresponding .keyzero.toml:

[secrets]
GITHUB_TOKEN = { provider = "1password", ref = "op://Dev/GitHub-PAT/token" }
DATABASE_URL = { provider = "vault", ref = "secret/data/dev/postgres/url" }

KeyZero resolves credentials from the configured backends and injects them into the MCP server process. In blind mode, the server receives opaque tokens instead of real values -- if an error message includes a credential, it shows KZ_TOK_xxxx instead of the real password. For a deeper dive into this mechanism, see blind mode explained.
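The opaque-token idea can be sketched in a few lines. This is a conceptual illustration of blind mode's substitution model, not KeyZero's actual implementation; `mint_token` and `proxy_swap` are hypothetical names:

```python
import secrets

vault = {}  # opaque token -> real credential, held only on the proxy side

def mint_token(real_value: str) -> str:
    # The server process receives this handle instead of the secret.
    token = f"KZ_TOK_{secrets.token_hex(4)}"
    vault[token] = real_value
    return token

def proxy_swap(outbound_data: str) -> str:
    # On egress, the trusted proxy replaces known tokens with real values.
    for token, real in vault.items():
        outbound_data = outbound_data.replace(token, real)
    return outbound_data

tok = mint_token("ghp_real_secret")
# If the server leaks its "credential" in an error, the agent sees only:
error_seen_by_agent = f"Error: bad credentials for token {tok}"
# While the actual API call, rewritten by the proxy, carries the real value:
wire_header = proxy_swap(f"Authorization: Bearer {tok}")
```

The key property: the real value exists only on the proxy side of the boundary, so no code path in the MCP server can leak it into a tool response.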

Pattern 2: KeyZero as an MCP Server

Instead of wrapping individual MCP servers, you can run KeyZero itself as an MCP server:

{
  "mcpServers": {
    "keyzero": {
      "command": "kz",
      "args": ["server", "start", "--bundle", "./bundle.yaml", "--mcp"]
    }
  }
}

This exposes two tools to the AI agent:

resolve -- Resolves a secret reference after JWT verification and policy evaluation. Returns the secret value to the agent. Use when the agent genuinely needs to see the value.

fetch -- Resolves the credential and makes an HTTP request on behalf of the agent. The agent receives the response body and status code but never sees the raw credential.

The fetch tool is the preferred pattern for API access. Instead of the agent calling https://api.github.com/repos with a token in the Authorization header, it asks KeyZero to make the request:

Tool: keyzero.fetch
Arguments:
  resource: "secret/data/dev/github/token"
  resolver: "github-api"
  url: "https://api.github.com/repos/myorg/myrepo/issues"
  method: "GET"

The agent gets the list of issues. The GitHub token never enters the conversation.
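The shape of the fetch pattern can be sketched as a single server-side function. The semantics here are assumed from the description above; `resolve_secret` and the `transport` callable are hypothetical stand-ins for KeyZero's backend resolution and HTTP client:

```python
from typing import Callable, Dict, Tuple

def resolve_secret(ref: str) -> str:
    # Stand-in for backend resolution (Vault, 1Password, etc.).
    return {"secret/data/dev/github/token": "ghp_real_secret"}[ref]

def keyzero_fetch(resource: str, url: str, method: str,
                  transport: Callable[[str, str, Dict[str, str]], Tuple[int, str]]) -> dict:
    token = resolve_secret(resource)
    status, body = transport(method, url, {"Authorization": f"Bearer {token}"})
    # Only status and body cross back to the agent -- never the credential.
    return {"status": status, "body": body}

# Stub transport standing in for a real HTTP client:
def fake_transport(method, url, headers):
    assert headers["Authorization"].startswith("Bearer ")
    return 200, '[{"number": 1, "title": "example issue"}]'

result = keyzero_fetch("secret/data/dev/github/token",
                       "https://api.github.com/repos/myorg/myrepo/issues",
                       "GET", fake_transport)
```

Because the credential is consumed inside the function and never included in the return value, there is no code path by which it can enter the agent's context.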

Connection Control

When running KeyZero as a proxy (blind mode), you can restrict which hosts the proxy is allowed to contact. This prevents a compromised or misbehaving MCP server from exfiltrating credentials to unauthorized endpoints:

[proxy]
allowed_hosts = [
  "api.github.com",
  "api.openai.com",
  "*.amazonaws.com",
]

Requests to hosts not on the allowlist are blocked at the proxy level, before the credential swap occurs.
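The matching behavior implied by that configuration -- exact hostnames plus `*.` wildcard patterns -- can be sketched with glob-style matching (assumed semantics, not KeyZero's actual matcher):

```python
from fnmatch import fnmatch

ALLOWED_HOSTS = ["api.github.com", "api.openai.com", "*.amazonaws.com"]

def host_allowed(host: str) -> bool:
    # A host passes if it matches any allowlist entry; "*.amazonaws.com"
    # matches s3.amazonaws.com but not amazonaws.com.evil.example, since
    # the pattern is anchored at both ends.
    return any(fnmatch(host, pattern) for pattern in ALLOWED_HOSTS)
```

Blocking at the proxy, before the credential swap, means a blocked request never carries a real credential in the first place.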

Example: Securing a GitHub MCP Server

A complete setup for a GitHub MCP server used with Claude Code or Cursor:

# .keyzero.toml
[secrets]
GITHUB_TOKEN = { provider = "1password", ref = "op://Dev/GitHub-PAT/token" }

[proxy]
allowed_hosts = ["api.github.com"]

# .mcp.json
{
  "mcpServers": {
    "github": {
      "command": "kz",
      "args": ["run", "--blind", "--", "npx", "@modelcontextprotocol/server-github"]
    }
  }
}

The GitHub MCP server starts with an opaque token in GITHUB_TOKEN. It makes API calls through the proxy, which swaps the token for the real PAT. If the server returns an error that includes the token value, the agent sees KZ_TOK_xxxx -- not the real credential. And if the server tries to contact a host other than api.github.com, the proxy blocks it.

Why MCP Servers Need Dedicated Secret Management

MCP servers occupy a unique position in the AI stack: they are trusted with real credentials but their output flows directly into an untrusted context (the LLM). Every tool response is a potential leakage vector. Traditional secret management solves the storage and distribution problem but does nothing about the leakage problem. KeyZero's blind mode and fetch tool address both sides -- credentials are resolved from secure backends and kept out of the communication channel between MCP servers and AI agents. For a comprehensive walkthrough, read securing MCP servers with KeyZero.