evalops/gate

Gate

Gate is a multi-protocol, policy-driven privileged access gateway. It sits between users and upstream databases, APIs, and servers, evaluates OPA/Rego policies, masks sensitive data, and centralizes audit, webhook, and recording workflows in a control plane.

                         ┌─────────────────────────────────┐
                         │          gate-control           │
                         │  Admin UI + HTTP + gRPC + DB    │
                         │                                 │
                         │  Resources  Policies  Audit     │
                         │  Identities Groups  Webhooks    │
                         │  Organizations  Recordings      │
                         └──────────────┬──────────────────┘
                                        │ gRPC / Protobuf
                         sync / audit / │ registration / heartbeat
                                        │
                         ┌──────────────▼──────────────────┐
   ┌───────────┐         │         gate-connector          │         ┌──────────┐
   │  Clients  │────────▶│ Policy evaluation + proxying    │────────▶│ Upstream │
   │ psql/curl │◀────────│ PostgreSQL / MongoDB / MySQL    │◀────────│ services │
   └───────────┘         │ HTTP / SSH / MCP / masking      │         └──────────┘
                         └─────────────────────────────────┘

Highlights

  • PostgreSQL, MongoDB, MySQL, HTTP, SSH, and MCP proxying.
  • Rego v1 policy evaluation at session, pre_request, and post_request stages.
  • Central management for resources, policies, identities, groups, organizations, scoped API keys, webhooks, access requests, audit logs, and recordings.
  • gRPC/Protobuf control-plane contracts with generated Go stubs.
  • Embedded admin UI plus HTTP /api/v1/* compatibility endpoints over the same backend RPC logic.
  • Dedicated connector health, readiness, and Prometheus metrics endpoints.
  • Durable dead-letter storage for async delivery paths and optional session recording storage.

Transport Model

gate-control exposes three surfaces on the same listener:

  • Embedded admin UI at /
  • HTTP health and admin endpoints such as /health and /api/v1/*
  • gRPC services gate.v1.ControlPlaneService and gate.v1.ConnectorService

The HTTP admin handlers are thin compatibility adapters over the gRPC services, so the web UI, HTTP clients, and Go RPC clients all share the same business logic and auth rules.

Connector-to-control-plane traffic uses gRPC/Protobuf for:

  • Policy sync
  • Audit ingestion
  • Connector registration
  • Connector heartbeat
  • Connector deregistration

Relevant sources:

  • Protobuf contracts: proto/gate/v1/*.proto
  • Generated Go stubs: internal/gen/gate/v1
  • Shared Go admin client: internal/controlplaneclient

Quick Start

Docker Compose

For the fastest local stack:

docker compose up --build

This starts:

  • gate-control on http://localhost:8080
  • gate-connector PostgreSQL listener on localhost:6432
  • connector health and metrics on :8081 inside the container (not published by default)
  • control-plane Postgres on localhost:5433
  • sample upstream Postgres on localhost:5434

The compose stack waits for a healthy control plane before starting the connector, and both Gate services publish container health checks so docker compose up --wait is race-free.

The checked-in docker-compose.yml is a local dev setup and runs the control plane with GATE_INSECURE=true. The sample connector config already points at the compose services and loads policies/example.rego for all three policy stages.

Example connection through the connector:

PGPASSWORD=app psql -h localhost -p 6432 -U app -d appdb

Run Manually

Both checked-in config files support environment-variable expansion:

  • configs/control.yaml
  • configs/connector.yaml

Run the binaries directly:

go run ./cmd/gate-control -config configs/control.yaml
go run ./cmd/gate-connector -config configs/connector.yaml

Authentication And Tenancy

  • HTTP and gRPC both use Bearer auth.
  • api_key in the control-plane config is the system/root key.
  • Scoped organization API keys support roles such as readonly, editor, admin, and connector.
  • System callers can apply org scope with X-Org-ID on HTTP requests or x-org-id gRPC metadata.
  • If api_key is empty, you must start gate-control with --insecure or GATE_INSECURE=true; every unauthenticated request is logged at warn level. This is only for local development.

Configuration

The sample configs in configs/ are the best starting point. The fields below are the ones that matter most when wiring a real deployment.

Control Plane

listen_addr: ":8080"
api_key: "${GATE_API_KEY}"

database:
  dsn: "postgres://${GATE_DB_USER}:${GATE_DB_PASSWORD}@${GATE_DB_HOST}:${GATE_DB_PORT}/${GATE_DB_NAME}?sslmode=disable"
  max_open_conns: 25
  min_conns: 5

recording:
  dir: ".data/recordings" # optional

delivery:
  dead_letter_dir: ".data/control-deadletters" # optional

slack: # optional
  bot_token: "${GATE_SLACK_BOT_TOKEN}"
  signing_secret: "${GATE_SLACK_SIGNING_SECRET}"
  default_approval_channel: "C0123456789"
  org_approval_channels:
    acme-prod: "C0987654321"
  resource_approval_channels:
    resource-123: "C0246813579"
  requester_mappings:
    alice@example.com: "U0123456789"

logging:
  level: "info"
  format: "json"

Connector

listen_addr: ":6432"
health_addr: ":8081"

control_plane:
  url: "http://localhost:8080"
  token: "${GATE_CONTROL_TOKEN}"
  sync_interval: "30s"
  sync_jitter: "5s" # optional: spread connector polls to avoid thundering herds

resources:
  - name: "primary-db"
    protocol: "postgres"
    host: "localhost"
    port: 5432
    database: "appdb"
    require_access_grant: true # optional: require an approved active grant before session setup
  - name: "agent-tools"
    protocol: "mcp"
    host: "mcp.internal"
    port: 8080
    listen_port: 7443         # required for MCP resources
    endpoint_path: "/mcp"     # optional: defaults to /mcp
    upstream_tls: true        # optional: dial the MCP server over HTTPS
    require_access_grant: true

policies:
  - path: "policies/example.rego"
    stage: "session"
  - path: "policies/example.rego"
    stage: "pre_request"
  - path: "policies/example.rego"
    stage: "post_request"

tls:
  enabled: false # optional

recording:
  dir: ".data/recordings" # optional

delivery:
  dead_letter_dir: ".data/connector-deadletters" # optional

logging:
  level: "info"
  format: "json"

MCP resources use dedicated listeners because they ride over Streamable HTTP rather than the connector's shared protocol-detection path. When session recordings are enabled, MCP interactions are stored alongside SSH recordings and can be replayed through the control plane recording APIs.

Control-plane migrations are embedded in the binary by default. Set database.migrations_path only when you intentionally want to override the built-in migration set during development.

Approved access requests can also carry scoped MCP grants. A grant with scopes such as {operation: "tool", target: "customer_lookup"} lets a connector enforce target-aware access directly, and tools/list, resources/list, and prompts/list responses are filtered down to the approved targets when the grant is narrower than the whole resource.
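As an illustrative shape only: the operation/target pair comes from the text above, while the surrounding structure and the second target are hypothetical, so check the access-request API for the canonical grant format.

```json
{
  "scopes": [
    {"operation": "tool", "target": "customer_lookup"},
    {"operation": "tool", "target": "order_status"}
  ]
}
```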

Slack JIT Approvals

gate-control can post new JIT access requests to Slack and accept approve or deny button clicks back on /integrations/slack/actions.

Routing precedence is:

  • resource-specific channel from slack.resource_approval_channels
  • org-specific channel from slack.org_approval_channels
  • fallback slack.default_approval_channel

Requester notifications use, in order:

  • explicit slack.requester_mappings
  • a direct Slack user ID in requested_by
  • Slack users.lookupByEmail when requested_by is an email address

See docs/slack-jit-approvals.md for the required Slack app scopes, interactivity setup, and an end-to-end configuration example.

Gate also emits outbound webhook events for access-request lifecycle changes:

  • access.requested
  • access.approved
  • access.denied

Each event carries the request id, org_id, requester/subject fields, resource_id, reason, duration, status, scopes, and timestamps. Approval events also include approved_by and expires_at; denial events include denied_by. That payload is designed to support downstream approval bridges such as Slack or Jira without forcing an extra control-plane read before rendering a notification or status update.
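A sketch of what such a payload might look like, built only from the fields listed above; the exact envelope, field spellings, and all values here are illustrative assumptions, so treat the live webhook delivery as authoritative.

```json
{
  "event": "access.approved",
  "id": "req_01HEXAMPLE",
  "org_id": "acme-prod",
  "requested_by": "alice@example.com",
  "resource_id": "resource-123",
  "reason": "on-call incident debugging",
  "duration": "1h",
  "status": "approved",
  "scopes": [{"operation": "tool", "target": "customer_lookup"}],
  "approved_by": "bob@example.com",
  "created_at": "2025-01-01T12:00:00Z",
  "expires_at": "2025-01-01T13:00:00Z"
}
```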

Policy Model

Gate evaluates Rego policies in three stages:

  • session: allow or block a connection/session before it is established
  • pre_request: allow, block, or rewrite a request/query before forwarding upstream
  • post_request: allow or transform the upstream response, including masking

post_request filters are currently enforced only for HTTP JSON array responses. For PostgreSQL, MongoDB, and MySQL, post-request enforcement supports masking; MCP rejects response filters and rewrites rather than silently applying them.

Managed policies in the control plane move through draft, dry_run, and active states. Resource targeting supports exact resource names as well as shell-style glob patterns such as prod-*, *-staging, and db-??.

Minimal example:

package formal.v2

import rego.v1

default pre_request := {"action": "allow"}

pre_request := {"action": "block", "reason": "destructive queries are not allowed"} if {
	input.query.type == "drop"
}

post_request := {
	"action": "mask",
	"masks": [
		{"column": "email", "strategy": "partial"},
		{"column": "ssn", "strategy": "redact"},
	],
}

Masking strategies include redact, partial, hash, and email. HTTP post_request policies may also return filter rules with eq, neq, in, and not_in operators that drop non-matching objects from JSON array responses before masking is applied.
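One way a combined filter-plus-mask rule could look, extending the minimal example above; the key names inside each filter rule ("field", "operator", "value") are illustrative assumptions, so check the shipped example policies for the canonical shape.

```rego
package formal.v2

import rego.v1

# Hypothetical sketch: drop objects whose "role" is not "member" from a
# JSON array response, then partially mask the email column.
post_request := {
	"action": "mask",
	"filters": [{"field": "role", "operator": "eq", "value": "member"}],
	"masks": [{"column": "email", "strategy": "partial"}],
}
```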

For MCP traffic, Gate also enriches policy input and audit details with mcp.method, mcp.namespace, mcp.action, mcp.operation, the generic mcp.target field, and specific target fields such as mcp.tool_name, mcp.resource_uri, and mcp.prompt_name.

For MongoDB traffic, Gate parses OP_MSG and legacy OP_QUERY commands into input.query.operation, input.query.database, input.query.collection, and input.query.filter, and applies post-request masking to matching reply fields before they reach the client.
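Those parsed fields make collection-level rules straightforward to write. A hedged sketch, reusing the rule shape from the minimal example (the collection name is hypothetical):

```rego
package formal.v2

import rego.v1

default pre_request := {"action": "allow"}

# Block access to a sensitive collection; "credit_cards" is an
# illustrative name, not one from the Gate examples.
pre_request := {"action": "block", "reason": "collection is restricted"} if {
	input.query.collection == "credit_cards"
}
```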

Gate exposes authenticated identity fields under input.user.*. For verified OIDC/JWT identities, Gate currently reads:

{
  "sub": "user-123",
  "name": "alice",
  "email": "alice@example.com",
  "groups": ["admins", "engineering"]
}

The groups array is mapped directly to input.user.groups, so policies can use expressions such as "admins" in input.user.groups. HTTP and MCP listeners also normalize X-Forwarded-Groups into the same input.user.groups field when they sit behind an authenticating proxy.
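For example, a group-gated session policy could look like the sketch below, assuming the session stage follows the same rule shape as the pre_request example above; the group name is illustrative.

```rego
package formal.v2

import rego.v1

# Only members of the "admins" group may establish a session.
default session := {"action": "block", "reason": "admins only"}

session := {"action": "allow"} if {
	"admins" in input.user.groups
}
```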

The sample policy used by the local connector config lives in policies/example.rego. Further reading:

  • examples/policies/README.md — ready-to-load policies covering IP allowlists, MongoDB collection allowlists, PII masking, business hours, audit-only dry runs, DDL restrictions, and read-only enforcement.
  • docs/policy-rollout.md — the safe promotion path from draft to dry_run to active, including rollback guidance and a worked IP allowlist example.
  • docs/runbooks.md — operational runbooks covering policy denials, connector registration failures, audit sink recovery, pool contention, and TLS rotation.
  • docs/mcp-gateway-quickstart.md — an end-to-end MCP gateway example built with the official Go SDK.
  • docs/mcp-proxy-architecture.md — the MCP gateway request, policy, audit, masking, and grant-scoping flow.
  • docs/control-plane-mcp.md — the read-only Gate control-plane MCP endpoint.
  • docs/webhook-signatures.md — webhook signing semantics and verification examples in Go, Python, and Node.
  • docs/slack-jit-approvals.md — Slack approval workflow setup and callback configuration.

See docs/benchmarks.md for the reproducible latency benchmark harness and docs/compliance-mappings.md for the initial compliance capability mapping. See docs/acra-porting-plan.md for the research-backed roadmap for porting Acra-style data protection patterns into Gate.

Pool Metrics

Connector health endpoints expose per-resource and per-upstream pool metrics:

  • gate_connpool_active_connections
  • gate_connpool_waiters
  • gate_connpool_wait_duration_seconds
  • gate_connpool_exhausted_total

Use them together when sizing connector pools:

  • If gate_connpool_waiters stays above 0, requests are queuing for that upstream.
  • If gate_connpool_exhausted_total keeps climbing, the pool is saturating under real traffic instead of only during spikes.
  • If gate_connpool_wait_duration_seconds shows a growing tail, add headroom before users start feeling connection setup latency.
  • If gate_connpool_active_connections is pinned high alongside waiters, scale out connectors or raise the per-upstream pool limit in your next rollout.
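As one hedged starting point, the guidance above can be turned into a Prometheus alerting rule; the metric name comes from the list above, but the threshold, windows, and label usage here are illustrative choices rather than recommendations from the Gate docs.

```yaml
groups:
  - name: gate-connector-pools
    rules:
      - alert: GateConnPoolExhausted
        # Fires when a pool keeps hitting exhaustion, not just during a spike.
        expr: increase(gate_connpool_exhausted_total[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Connector pool for {{ $labels.resource }} is exhausting"
```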

Latency Metrics

Connector metrics also expose latency histograms for policy evaluation and end-to-end request handling:

  • gate_policy_evaluation_duration_seconds{protocol,resource,stage,action}
  • gate_request_duration_seconds{protocol,resource}
  • gate_connpool_wait_duration_seconds{resource,upstream}

PromQL examples:

  • p95 end-to-end latency by protocol/resource: histogram_quantile(0.95, sum by (le, protocol, resource) (rate(gate_request_duration_seconds_bucket[5m])))
  • p95 policy evaluation latency by protocol/resource/stage/action: histogram_quantile(0.95, sum by (le, protocol, resource, stage, action) (rate(gate_policy_evaluation_duration_seconds_bucket[5m])))
  • p95 upstream pool wait by resource/upstream: histogram_quantile(0.95, sum by (le, resource, upstream) (rate(gate_connpool_wait_duration_seconds_bucket[5m])))

API Surface

Top-level HTTP admin areas under /api/v1/ include:

  • resources
  • policies
  • organizations
  • identities
  • groups
  • connectors
  • webhooks
  • access-requests
  • audit
  • recordings

The control plane also exposes a read-only MCP endpoint on /mcp. It uses the same Bearer API keys and optional X-Org-ID scoping as the HTTP and gRPC admin surfaces, and currently includes tools for listing resources, policies, connectors, and access requests, querying audit logs, and simulating policies.

The gRPC contracts are defined in:

  • proto/gate/v1/controlplane.proto
  • proto/gate/v1/connector.proto

Domain messages are split across:

  • proto/gate/v1/resource_policy.proto
  • proto/gate/v1/identity_group.proto
  • proto/gate/v1/tenant.proto
  • proto/gate/v1/webhook_access.proto
  • proto/gate/v1/audit.proto
  • proto/gate/v1/recording.proto

Development

Prerequisites:

  • Go 1.25+
  • Docker for local compose and integration tests
  • golangci-lint v2 for make lint

Common commands:

make build
make test
make test-race
make lint
make generate-proto
make verify-generated
make docker
make test-e2e

Notes:

  • make generate-proto regenerates all Go stubs from proto/gate/v1/*.proto.
  • make verify-generated is what CI uses to ensure generated files are up to date.
  • make test-e2e brings up docker-compose.test.yml, runs the integration suite with the integration build tag, then tears the stack down.
  • docker compose -f docker-compose.test.yml --profile stack up --build --wait brings up the same control-plane and connector health-gated stack used in local compose, alongside the test databases.

Repository Layout

gate/
├── cmd/
│   ├── gate-connector/              # Connector binary entrypoint
│   ├── gate-control/                # Control-plane binary entrypoint
│   └── gate-policy/                 # Policy tooling
├── configs/                         # Sample runtime configs
├── internal/
│   ├── connector/                   # Proxy runtime, routing, TLS, health, metrics
│   ├── controlplane/                # HTTP server, gRPC services, embedded UI bridge
│   ├── controlplaneclient/          # Shared Go admin client for ControlPlaneService
│   ├── enforcement/                 # Masking, filtering, rewrite helpers
│   ├── gen/gate/v1/                 # Generated protobuf and gRPC stubs
│   ├── policy/                      # OPA/Rego policy engine
│   ├── protocol/                    # Postgres, MySQL, HTTP, SSH protocol handlers
│   ├── recording/                   # Session recording storage/replay primitives
│   ├── store/                       # Postgres and in-memory stores + migrations
│   ├── sync/                        # Connector policy sync and audit shipping
│   └── webhooks/                    # Webhook dispatch and retry/dead-letter handling
├── policies/                        # Example local policies
├── proto/gate/v1/                   # Protobuf contracts
├── scripts/                         # Helper scripts, including protobuf generation
├── Dockerfile                       # Multi-stage connector/control image build
├── docker-compose.yml               # Local dev stack
└── docker-compose.test.yml          # Integration test dependencies

License

Business Source License 1.1. See LICENSE for the current terms.

About

Multi-protocol, policy-driven privileged access proxy for AI agents and humans. PostgreSQL, MongoDB, MySQL, HTTP, SSH, and MCP proxying with OPA/Rego policies, data masking, agent-aware identity, and audit logging.
