Last updated: March 16, 2026

Build an MCP server that exposes your deployment environment details to AI coding assistants: a Node.js server using the Model Context Protocol that returns structured information about your cloud provider, regions, container orchestration, and infrastructure configuration. With this context, your AI assistant can generate Terraform configurations, Kubernetes manifests, and deployment workflows that match your actual infrastructure.

Prerequisites

Before you begin, make sure you have the following ready:

- Node.js 18 or later, with npm
- An MCP-capable AI assistant such as Claude Code or Cursor
- (Optional, for Step 3) AWS credentials and a kubeconfig for the infrastructure you want to query

Step 1: Understand MCP Servers and Deployment Context

MCP servers act as bridges between AI models and external systems. A deployment environment context server provides information about your infrastructure—cloud provider, region, container orchestration platform, environment variables, secrets management, and networking configuration. With this context, your AI assistant can generate Terraform configurations that match your AWS setup, Kubernetes manifests that respect your existing namespace conventions, or GitHub Actions workflows that deploy to your specific infrastructure.

The protocol follows a request-response pattern where the AI sends a request to your server, and your server returns structured data. Building this server gives you control over exactly what environment information your AI can access.
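As a concrete illustration (the payload values are invented), a client's tools/call request looks like this on the wire:

```json
{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "get_deployment_context", "arguments": {}}}
```

and the server answers with structured text content:

```json
{"jsonrpc": "2.0", "id": 2,
 "result": {"content": [{"type": "text", "text": "{\"cloudProvider\": \"aws\", \"region\": \"us-west-2\"}"}]}}
```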

Step 2: Set Up Your MCP Server Project

Start by creating a new Node.js project for your MCP server:

mkdir deployment-context-mcp && cd deployment-context-mcp
npm init -y
npm install @modelcontextprotocol/sdk
npm install --save-dev typescript tsx

Create the main server file:

// server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  {
    name: "deployment-context-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "get_deployment_context",
        description: "Returns current deployment environment details including cloud provider, region, cluster info, and relevant configuration.",
        inputSchema: {
          type: "object",
          properties: {},
        },
      },
      {
        name: "get_kubernetes_context",
        description: "Returns Kubernetes-specific context including namespace, available services, and ingress configurations.",
        inputSchema: {
          type: "object",
          properties: {},
        },
      },
    ],
  };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "get_deployment_context") {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            cloudProvider: "aws",
            region: "us-west-2",
            environment: "production",
            services: [
              { name: "api-gateway", type: "ecs", port: 8080 },
              { name: "auth-service", type: "ecs", port: 3001 },
              { name: "data-processing", type: "lambda" }
            ],
            database: {
              primary: { type: "aurora-postgresql", version: "15.4" },
              cache: { type: "elasticache-redis", version: "7.0" }
            },
            networking: {
              vpcCidr: "10.0.0.0/16",
              privateSubnets: ["10.0.1.0/24", "10.0.2.0/24"],
              loadBalancerArn: "arn:aws:elasticloadbalancing:us-west-2:123456789:loadbalancer/app/prod-lb/abc123"
            }
          }, null, 2),
        },
      ],
    };
  }

  if (name === "get_kubernetes_context") {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            currentContext: "production",
            namespace: "production",
            availableNamespaces: ["development", "staging", "production"],
            ingressClass: "nginx",
            storageClass: "gp3",
            serviceMesh: "istio"
          }, null, 2),
        },
      ],
    };
  }

  throw new Error(`Unknown tool: ${name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
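Both tool branches above wrap a JSON payload in the same content envelope. A small helper (hypothetical, not part of the SDK) removes that duplication:

```typescript
// Wrap any serializable object in the MCP text-content envelope
// that tool handlers return (hypothetical helper, not part of the SDK).
type ToolResult = { content: { type: "text"; text: string }[] };

function toToolResult(payload: unknown): ToolResult {
  return {
    content: [{ type: "text", text: JSON.stringify(payload, null, 2) }],
  };
}
```

Each handler branch then reduces to building its context object and returning `toToolResult(context)`.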

Step 3: Connect to Your Actual Infrastructure

For production use, replace the hardcoded values with real infrastructure queries. The following examples show how to fetch actual deployment context from different sources.

AWS Integration

Query your AWS environment directly:

import { ECSClient, DescribeClustersCommand } from "@aws-sdk/client-ecs";

async function getAWSContext() {
  const ecs = new ECSClient({ region: process.env.AWS_REGION || "us-west-2" });

  const clusterResponse = await ecs.send(new DescribeClustersCommand({
    clusters: [process.env.ECS_CLUSTER_NAME ?? "default"],
  }));

  const cluster = clusterResponse.clusters?.[0];
  if (!cluster) {
    throw new Error(`ECS cluster not found: ${process.env.ECS_CLUSTER_NAME}`);
  }

  return {
    cloudProvider: "aws",
    region: process.env.AWS_REGION || "us-west-2",
    ecs: {
      clusterName: cluster.clusterName,
      clusterArn: cluster.clusterArn,
      runningTasks: cluster.runningTasksCount,
      services: cluster.activeServicesCount,
    },
  };
}
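Live AWS calls add latency to every tool invocation, and deployment context rarely changes second to second. One option is to memoize the query for a short TTL; the sketch below is generic and assumes nothing about the AWS SDK:

```typescript
// Memoize an async producer for `ttlMs` milliseconds, so repeated
// tool calls within the window reuse the last fetched context.
function cached<T>(producer: () => Promise<T>, ttlMs: number): () => Promise<T> {
  let value: Promise<T> | null = null;
  let fetchedAt = 0;
  return () => {
    const now = Date.now();
    if (!value || now - fetchedAt > ttlMs) {
      fetchedAt = now;
      value = producer();
    }
    return value;
  };
}

// e.g. const getContext = cached(getAWSContext, 60_000);
```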

Kubernetes Integration

Query your current Kubernetes context using the official client:

import * as k8s from "@kubernetes/client-node";

// Note: list calls return { body } in @kubernetes/client-node 0.x;
// in 1.x they return the body object directly.
async function getKubernetesContext() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const coreApi = kc.makeApiClient(k8s.CoreV1Api);

  const namespaceResponse = await coreApi.listNamespace();
  const namespaces = namespaceResponse.body.items.map(ns => ns.metadata?.name);

  const currentContext = kc.getCurrentContext();

  return {
    currentContext,
    namespace: kc.getContextObject(currentContext)?.namespace || "default",
    namespaces,
    storageClasses: await getStorageClasses(kc),
    ingressClasses: await getIngressClasses(kc),
  };
}

// StorageClass objects live in the storage.k8s.io API group, not CoreV1.
async function getStorageClasses(kc: k8s.KubeConfig) {
  const storageApi = kc.makeApiClient(k8s.StorageV1Api);
  const scResponse = await storageApi.listStorageClass();
  return scResponse.body.items.map(sc => sc.metadata?.name);
}

// IngressClass objects live in the networking.k8s.io API group.
async function getIngressClasses(kc: k8s.KubeConfig) {
  const netApi = kc.makeApiClient(k8s.NetworkingV1Api);
  const icResponse = await netApi.listIngressClass();
  return icResponse.body.items.map(ic => ic.metadata?.name);
}
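Cluster queries fail when no kubeconfig is present (for example, in CI or on a fresh laptop). Rather than letting the whole tool call crash, you can fall back to static defaults. A minimal sketch, independent of the Kubernetes client:

```typescript
// Run an async infrastructure query, returning `fallback` if it throws,
// so the tool still responds with a best-effort context.
async function withFallback<T>(query: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await query();
  } catch {
    return fallback;
  }
}

// e.g. await withFallback(getKubernetesContext, { currentContext: "unknown" })
```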

Step 4: Register Your Server with Claude Code

After building your server, register it with your AI assistant. For Claude Code, add it to your configuration:

# Register the server with Claude Code (server.ts run via the tsx loader)
claude mcp add deployment-context -- npx tsx server.ts

For Cursor, add it through the settings interface under “MCP Servers” with the path to your server executable.
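Most MCP-aware clients accept a JSON entry along these lines (the exact file location varies by client; `command` and `args` here assume the server runs via tsx and should point at your own entry file):

```json
{
  "mcpServers": {
    "deployment-context": {
      "command": "npx",
      "args": ["tsx", "server.ts"]
    }
  }
}
```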

Practical Example: Generating Environment-Aware Code

With your deployment context server running, your AI assistant can now generate infrastructure-aware code. Consider this prompt:

“Create a Kubernetes deployment manifest for a Node.js API service”

Without deployment context, the AI might generate a generic manifest. With your MCP server providing context, it can generate:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  namespace: production
  labels:
    app: api-gateway
    environment: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api-gateway
        image: your-registry/api-gateway:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: connection-string
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-gateway
  ports:
  - port: 80
    targetPort: 8080

The manifest now includes your production namespace, appropriate resource limits, and secret references matching your actual Kubernetes configuration.

Step 5: Extend Your Server

Beyond basic context, consider adding further tools, such as CI/CD pipeline status, Terraform state summaries, or DNS and certificate metadata, so your assistant can reason about more of your deployment surface.

Security Considerations

When building your deployment context server, follow these practices:

Never expose actual secrets or credentials through your MCP server. Return only metadata and configuration patterns. Implement authentication if your server will be used by multiple team members. Consider rate limiting to prevent abuse, and audit logging to track which tools accessed what information.
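A simple safeguard for the first rule is to redact suspicious keys before any payload leaves the server. This sketch (hypothetical key patterns; tune the regex to your naming conventions) walks the context object recursively:

```typescript
// Replace values whose key names look secret-bearing before the
// context object is serialized into a tool response.
const SECRET_KEY = /secret|password|token|credential|key$/i;

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SECRET_KEY.test(k) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}
```

Call `redact` on every context object just before serializing it, so new fields added later are covered by default.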

Step 6: Test Your Implementation

Verify your server works correctly before using it with your AI assistant:

# Test the server directly: MCP requires an initialize handshake
# before tools/list, so pipe all three messages in order
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}
{"jsonrpc":"2.0","method":"notifications/initialized"}
{"jsonrpc":"2.0","id":2,"method":"tools/list"}' | npx tsx server.ts

You should receive a list of available tools. Then test each tool individually to confirm it returns the expected data structure.
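You can also sanity-check the response shape programmatically. This validator (hypothetical, matching the tools/list shape defined in Step 2) is handy inside a unit test:

```typescript
// Verify a tools/list result has the minimal shape a client expects:
// a `tools` array whose entries each carry a name, a description,
// and an object-typed inputSchema.
function isValidToolsList(result: unknown): boolean {
  if (typeof result !== "object" || result === null) return false;
  const tools = (result as { tools?: unknown }).tools;
  if (!Array.isArray(tools)) return false;
  return tools.every((t: any) =>
    typeof t?.name === "string" &&
    typeof t?.description === "string" &&
    t?.inputSchema?.type === "object"
  );
}
```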

Troubleshooting

Configuration changes not taking effect

Restart the relevant service or application after making changes. Some settings require a full system reboot. Verify the configuration file path is correct and the syntax is valid.

Permission denied errors

Run the command with sudo for system-level operations, or check that your user account has the necessary permissions. On macOS, you may need to grant terminal access in System Settings > Privacy & Security.

Connection or network-related failures

Check your internet connection and firewall settings. If using a VPN, try disconnecting temporarily to isolate the issue. Verify that the target server or service is accessible from your network.

Frequently Asked Questions

How long does it take to build a Model Context Protocol server?

For a straightforward setup, expect 30 minutes to 2 hours depending on your familiarity with the tools involved. Complex configurations with custom requirements may take longer. Having your credentials and environment ready before starting saves significant time.

What are the most common mistakes to avoid?

The most frequent issues are skipping prerequisite steps, using outdated package versions, and not reading error messages carefully. Follow the steps in order, verify each one works before moving on, and check the official documentation if something behaves unexpectedly.

Do I need prior experience to follow this guide?

Basic familiarity with the relevant tools and command line is helpful but not strictly required. Each step is explained with context. If you get stuck, the official documentation for each tool covers fundamentals that may fill in knowledge gaps.

Is this approach secure enough for production?

The patterns shown here follow standard practices, but production deployments need additional hardening. Add rate limiting, input validation, proper secret management, and monitoring before going live. Consider a security review if your application handles sensitive user data.

Where can I get help if I run into issues?

Start with the official documentation for each tool mentioned. Stack Overflow and GitHub Issues are good next steps for specific error messages. Community forums and Discord servers for the relevant tools often have active members who can help with setup problems.

Built by theluckystrike — More at zovo.one