Run Model Context Protocol (MCP) servers on Kubernetes. Each entry in `servers[]` becomes its own Deployment+Service, and the Unla gateway subchart exposes the aggregated MCP endpoint. You can run Node (npx), Python via uv/uvx, or any container image exposing a port.

Stdio-only tools are wrapped via `servers[].stdioBridge`. The bridge uses our `ghcr.io/icoretech/mcp-stdio-bridge` image (Node + Python + uv + mcp-proxy) to spawn the stdio program inside the Pod and expose it as an SSE endpoint.

Everything is wired into the Unla gateway subchart so you get a single MCP endpoint for all of your servers. Unla provides two components:
- Gateway (`mcp-server` Service, port 5235) – the proxied endpoint that aggregates every server you define.
- Dashboard (`mcp-server-dashboard` Service, port 5234) – UI + API to view auto-discovered tools, sync configs, and manage tenants.

The register job runs after each install/upgrade and pushes the generated `mcpServers` routers into Unla so the dashboard immediately sees your changes.
If you prefer only Deployments/Services without a gateway, set `unla.enabled=false`.
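In values form that toggle is just the snippet below (a minimal sketch; combine it with your `servers[]` entries):

```yaml
# Disable the Unla gateway/dashboard subchart; only the per-server
# Deployments and Services generated from servers[] are rendered.
unla:
  enabled: false
```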
Most fields in `servers[]` have defaults. A stdio bridge typically needs only the package name and the command to launch it:

```yaml
servers:
  - name: stdio-reddit
    python:
      package: mcp-server-reddit
    stdioBridge:
      enabled: true
      serverCommand: ["uvx", "mcp-server-reddit"]
```
You can omit `register.*`, `port`, and other knobs unless you need to override them; the defaults cover the common cases. The register job automatically pushes the router + `mcpServers` block to Unla so the dashboard picks it up.
`servers[].*` supports three patterns:

- `node` / `python` / `image`: native network servers. They expose HTTP or WebSocket directly.
- `stdioBridge`: wrap a stdio-only tool and convert it to SSE. The bridge image ships with Node, Python, uv/uvx, and mcp-proxy so commands like `uvx mcp-server-reddit` work out of the box.
- `openapi`: register an external OpenAPI spec with Unla (no Deployment).

Stdio bridges are opt-in per server. When `stdioBridge.enabled=true`, the chart runs the bridge container and honors `serverCommand`/`serverArgs`. If you're using a server that already supports HTTP/WebSocket, skip the bridge and use `node`, `python`, or `image` directly (a rough sketch follows below).
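A sketch of the non-bridge patterns, with loudly illustrative field names (`image.repository`, `image.tag`, and the placement of `port` are guesses here, not the authoritative schema; `examples/servers.yaml` shows the real fields for each pattern):

```yaml
servers:
  # Native Python server that already speaks HTTP/SSE, so no stdioBridge block.
  - name: native-python
    python:
      package: some-mcp-server                     # hypothetical package name
  # Custom container image exposing a port (field names below are illustrative).
  - name: custom-image
    image:
      repository: ghcr.io/example/my-mcp-server    # hypothetical image
      tag: "1.0.0"
    port: 8080
  # An openapi entry registers an external spec with Unla and creates no
  # Deployment; see examples/servers.yaml for its exact fields.
```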
Until clients ship native remote MCP support, you can front the gateway with `mcp-remote`. Example Codex config against a trusted LAN endpoint:
```toml
[mcp_servers.reddit_gateway]
command = "npx"
args = [
  "-y",
  "mcp-remote",
  "http://your-gateway:5235/gateway/stdio-reddit/sse",
  "--allow-http",
  "--transport", "sse-only",
  "--header", "Accept: application/json, text/event-stream"
]
```
Restart your client after redeploying or renaming servers so `mcp-remote` re-establishes the SSE handshake.
To install the chart with the release name `my-mcp` from the OCI registry:

```bash
helm install my-mcp oci://ghcr.io/icoretech/charts/mcp-server
```

Or from the GitHub Pages Helm repo:

```bash
helm repo add icoretech https://icoretech.github.io/helm
helm repo update
helm install my-mcp icoretech/mcp-server
```
The following table lists the configurable parameters of the chart and their default values.
| Key | Type | Default | Description |
|---|---|---|---|
| `fullnameOverride` | string | `""` | Override the fully qualified release name |
| `nameOverride` | string | `""` | Override the chart name |
| `servers` | list | `[]` | MCP server definitions; each entry renders its own Deployment and Service |
| `unla.enabled` | bool | `true` | Deploy the Unla gateway and dashboard subchart |
When `unla.enabled=true` (the default) the chart also deploys:

- The Unla gateway (`mcp-server`)
- The Unla dashboard (`mcp-server-dashboard`)

Security note: on first install the dashboard secret seeds random credentials (stored in `mcp-dashboard-secret`). They are reused on subsequent reconciliations via `lookup`, so Flux users do not get churn. The register job reads the same secret. For production, override `unla.dashboard.SUPER_ADMIN_USERNAME` and `_PASSWORD` with your own values.
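In values form, pinning the dashboard credentials might look like the sketch below (placeholder values; in practice you would inject real secrets from your secret manager rather than committing them to Git):

```yaml
unla:
  dashboard:
    SUPER_ADMIN_USERNAME: admin        # replace with your own username
    SUPER_ADMIN_PASSWORD: change-me    # replace; avoid committing real passwords
```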
Port-forward the gateway to talk to every server:

```bash
kubectl -n <ns> port-forward svc/<release> 8000:5235
```
`examples/servers.yaml` demonstrates Node, Python, stdio bridge, custom image, and OpenAPI servers in one values file. Fields that can be omitted in real deployments are annotated inline.