Deploying in a Docker Container
Deployment of the pgEdge Postgres MCP Server is easy; you can get up and running in a test environment in minutes. Before deploying the server, you'll need to install or obtain:
- a Postgres database (with pg_description support)
- Docker
- an LLM provider: an API key for Anthropic or OpenAI, or a local Ollama installation (free)
In your Postgres database, you'll need to create a LOGIN user for this demo; the user name and password are supplied to the server in the configuration file used for deployment.
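For example, you could create the demo user with psql; the role name, password, and the SELECT-only grant below are illustrative choices, so adjust them to match what you want the server to access:
# Create a login role for the MCP server (run as a superuser or another role-creating user)
psql -h your-postgres-host -d your-database-name -U postgres -c "CREATE ROLE mcp_demo LOGIN PASSWORD 'choose-a-strong-password';"
# Grant read access; broaden or narrow this to suit your use case
psql -h your-postgres-host -d your-database-name -U postgres -c "GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_demo;"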
Deploying into a Docker Container
After meeting the prerequisites, use the steps that follow to deploy the server into a Docker container.
Clone the Repository
Clone the pgedge-postgres-mcp repository and navigate into the repository's root directory:
git clone https://github.com/pgEdge/pgedge-postgres-mcp.git
cd pgedge-postgres-mcp
Create a Configuration File
The .env.example file is a sample configuration that we can use for deployment; instead of updating the original, copy the sample file to .env:
cp .env.example .env
Then edit .env, adding your deployment details. In the DATABASE CONNECTION section, provide your Postgres connection details:
# ============================================================================
# DATABASE CONNECTION
# ============================================================================
# PostgreSQL connection details
PGEDGE_DB_HOST=your-postgres-host
PGEDGE_DB_PORT=5432
PGEDGE_DB_NAME=your-database-name
PGEDGE_DB_USER=your-database-user
PGEDGE_DB_PASSWORD=your-database-password
PGEDGE_DB_SSLMODE=prefer
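Before moving on, you may want to confirm that these connection details actually work; a quick psql check (substituting your own values) looks like this:
# Connect with the same parameters the server will use and run a trivial query
psql "host=your-postgres-host port=5432 dbname=your-database-name user=your-database-user sslmode=prefer" -c "SELECT version();"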
Specify the name of your embedding provider in the EMBEDDING PROVIDER CONFIGURATION section:
# ============================================================================
# EMBEDDING PROVIDER CONFIGURATION
# ============================================================================
# Provider for text embeddings: voyage, openai, or ollama
PGEDGE_EMBEDDING_PROVIDER=voyage
# Model to use for embeddings
# Voyage: voyage-3, voyage-3-large (requires API key)
# OpenAI: text-embedding-3-small, text-embedding-3-large (requires API key)
# Ollama: nomic-embed-text, mxbai-embed-large (requires local Ollama)
PGEDGE_EMBEDDING_MODEL=voyage-3
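If you select the ollama provider, the embedding model must already be present on your Ollama server; you can pull one of the models listed above before starting the container:
# Download the embedding model so it is available at startup
ollama pull nomic-embed-text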
Provide your API key in the LLM API KEYS section:
# ============================================================================
# LLM API KEYS
# ============================================================================
# Anthropic API key (for Claude models and Voyage embeddings)
# Get your key from: https://console.anthropic.com/
PGEDGE_ANTHROPIC_API_KEY=your-anthropic-api-key-here
# OpenAI API key (for GPT models and OpenAI embeddings)
# Get your key from: https://platform.openai.com/
PGEDGE_OPENAI_API_KEY=your-openai-api-key-here
# Ollama server URL (for local models)
# Default: http://localhost:11434 (change if Ollama runs elsewhere)
PGEDGE_OLLAMA_URL=http://localhost:11434
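One caveat when the server itself runs in Docker: localhost inside the container refers to the container, not to your machine. If Ollama runs on the Docker host, you'll likely need the host's address instead; on Docker Desktop, for example:
# Reach an Ollama instance running on the Docker host (Docker Desktop)
PGEDGE_OLLAMA_URL=http://host.docker.internal:11434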
API Key Security
For a production environment, mount API key files instead of using environment variables:
volumes:
- ~/.anthropic-api-key:/app/.anthropic-api-key:ro
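In context, this is a fragment of a docker-compose.yml service definition; the service name below (mcp-server) is illustrative, so match it to the name used in the repository's compose file:
services:
  mcp-server:
    volumes:
      # Mount the key read-only so it never appears in the container's environment
      - ~/.anthropic-api-key:/app/.anthropic-api-key:ro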
During deployment, the server creates the users and tokens that clients authenticate with; you specify them in the AUTHENTICATION CONFIGURATION section. For a simple test environment, setting the INIT_USERS property is the easiest option:
# ============================================================================
# AUTHENTICATION CONFIGURATION
# ============================================================================
# The server supports both token-based and user-based authentication
# simultaneously. You can initialize both types during container startup.
# Initialize tokens (comma-separated list)
# Use for service-to-service authentication or API access
# Format: token1,token2,token3
# Example: INIT_TOKENS=my-secret-token-1,my-secret-token-2
INIT_TOKENS=
# Initialize users (comma-separated list of username:password pairs)
# Use for interactive user authentication with session tokens
# Format: username1:password1,username2:password2
# Example: INIT_USERS=alice:secret123,bob:secret456
INIT_USERS=
# Client token for CLI access (if using token authentication)
# This should match one of the tokens in INIT_TOKENS
MCP_CLIENT_TOKEN=
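For a throwaway test environment, a minimal setting might look like the line below; the credentials are placeholders, so use stronger values anywhere that matters:
# A single interactive user for testing; INIT_TOKENS and MCP_CLIENT_TOKEN can stay empty
INIT_USERS=demo:change-this-password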
You also need to specify the LLM provider information in the LLM CONFIGURATION FOR CLIENTS section:
# ============================================================================
# LLM CONFIGURATION FOR CLIENTS
# ============================================================================
# Default LLM provider for chat clients: anthropic, openai, or ollama
PGEDGE_LLM_PROVIDER=anthropic
# Default LLM model for chat clients
# Anthropic: claude-sonnet-4-20250514, claude-opus-4-20250514, etc.
# OpenAI: gpt-5-main, gpt-4o, gpt-4-turbo, etc.
# Ollama: llama3, mistral, etc.
PGEDGE_LLM_MODEL=claude-sonnet-4-20250514
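As with embeddings, if you point the server at Ollama for chat, the model must be pulled before first use:
# Download the chat model so it is available when clients connect
ollama pull llama3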
Deploy the Server
After updating the configuration file, you can start the Docker container and deploy the server:
docker-compose up -d
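You can confirm that the container came up cleanly before connecting:
# Check container status, then follow the startup logs
docker-compose ps
docker-compose logs -f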
Connect with a Browser
When the deployment completes, use your browser to open http://localhost:8081 and log in with the credentials you set in the INIT_USERS property.
You're ready!
Start asking questions about your database in natural language.
Performing a Health Check
All deployment methods expose a health endpoint:
curl http://localhost:8080/health
Response:
{"status": "ok", "server": "pgedge-postgres-mcp", "version": "1.0.0"}