## Configuration
This chart is built around managing each pgEdge node as a CloudNativePG Cluster.
The chart contains a default clusterSpec in values.yaml which defines the required configuration for deploying pgEdge with CloudNativePG, including:
- deploying with the pgEdge Enterprise Postgres Images.
- loading and initializing required extensions for pgEdge Distributed Postgres.
- setting up required PostgreSQL configuration parameters.
- configuring client certificate authentication for managed users (app, admin, streaming_replica).
- allowing local connections for the app and admin users for testing and development purposes.
The simplest example values file, which deploys a single primary instance for each node, looks like this:
```yaml
pgEdge:
  appName: pgedge
  nodes:
    - name: n1
      hostname: pgedge-n1-rw
    - name: n2
      hostname: pgedge-n2-rw
    - name: n3
      hostname: pgedge-n3-rw
  clusterSpec:
    storage:
      size: 1Gi
```
Each node in the nodes list supports the following properties:
| Property | Required | Description |
|---|---|---|
| name | Yes | Unique identifier for the node (e.g., n1, n2). Used to derive Kubernetes resource names and Spock node names. |
| hostname | Yes | The externally routable hostname for the node. This is stored in Spock's DSN and used by other nodes for replication connections. |
| internalHostname | No | An optional cluster-internal hostname used for connectivity checks during initialization. When specified, the init-spock job uses this address to verify the node is accepting connections, while still using hostname for replication DSNs. Useful in multi-cluster deployments where hostname may be an external IP not routable from within the cluster. |
| ordinal | No | Override the automatically derived node ordinal used for snowflake.node and lolor.node configuration. |
| clusterSpec | No | Node-specific CloudNativePG Cluster configuration that overrides the global clusterSpec. |
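For example, in a multi-cluster deployment where each node is reached externally through a load balancer, internalHostname can point at the in-cluster service while hostname carries the external address. A sketch (the addresses below are illustrative, not chart defaults):

```yaml
pgEdge:
  nodes:
    - name: n1
      # External address stored in the Spock DSN and used by remote nodes (illustrative).
      hostname: 203.0.113.10
      # In-cluster service used by the init-spock job for connectivity checks.
      internalHostname: pgedge-n1-rw
```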
As shown, the default clusterSpec can be overridden for all nodes with the specific configuration your Kubernetes setup requires.
You can also override the clusterSpec for specific nodes if you require more granular control.
For example, to create a 3-node cluster with 3 instances on node n1 and single instances on nodes n2 and n3, you could use:
```yaml
pgEdge:
  appName: pgedge
  nodes:
    - name: n1
      hostname: pgedge-n1-rw
      clusterSpec:
        instances: 3
        postgresql:
          synchronous:
            method: any
            number: 1
            dataDurability: required
    - name: n2
      hostname: pgedge-n2-rw
    - name: n3
      hostname: pgedge-n3-rw
```
This override behavior is implemented with Helm's mergeOverwrite, so be mindful that lists are replaced, not merged.
If you override a list in a node's clusterSpec, you must include every required element in that list: start from the example in the default values file, and use helm template to verify the rendered configuration.
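For example, if a node overrides the postgresql.pg_hba list, the chart's default entries must be restated, because the whole list is replaced rather than extended. A sketch based on the default pg_hba (the final reporting rule is hypothetical):

```yaml
pgEdge:
  nodes:
    - name: n1
      hostname: pgedge-n1-rw
      clusterSpec:
        postgresql:
          pg_hba:
            # Restate the chart defaults first; this list replaces them entirely.
            - hostssl app pgedge 0.0.0.0/0 cert
            - hostssl app admin 0.0.0.0/0 cert
            - hostssl app app 0.0.0.0/0 cert
            - hostssl all streaming_replica all cert map=cnpg_streaming_replica
            # Then append any node-specific rules (hypothetical example).
            - hostssl app reporting 10.0.0.0/8 scram-sha-256
```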
For more information about configuring CloudNativePG, see the CloudNativePG documentation.
## Users and authentication
The chart creates several PostgreSQL roles, each with a specific purpose. Role management is built on top of CloudNativePG's PostgreSQL Role Management and Bootstrap configuration.
| Role | Purpose | Authentication | Configurable |
|---|---|---|---|
| app (owner) | Application database owner. Created by initdb.owner. | Client certificate (app-client-cert secret) and per-node password (pgedge-n#-&lt;owner&gt; secret, default pgedge-n#-app) | Yes — via initdb.owner |
| admin | Superuser for database administration. Used by the init-spock job. | Client certificate (admin-client-cert secret) and per-node password (pgedge-n#-&lt;adminUser&gt; secret, default pgedge-n#-admin) | Yes — via pgEdge.adminUser |
| pgedge | Internal replication user for Spock. Created by the init-spock job. | Client certificate (pgedge-client-cert secret) | No |
| streaming_replica | Used by CloudNativePG for physical streaming replication within each node's HA cluster. | Client certificate (streaming-replica-client-cert secret) | No |
### Customizing the database name and owner
The default application database is named app with an app owner. To use a different name, set pgEdge.clusterSpec.bootstrap.initdb.database and pgEdge.clusterSpec.bootstrap.initdb.owner:
```yaml
pgEdge:
  clusterSpec:
    bootstrap:
      initdb:
        database: mydb
        owner: myuser
```
When changing the database or owner name, you must also override pg_hba and pg_ident to match, since these are plain values that cannot reference other values:
```yaml
postgresql:
  pg_hba:
    - hostssl mydb pgedge 0.0.0.0/0 cert
    - hostssl mydb admin 0.0.0.0/0 cert
    - hostssl mydb myuser 0.0.0.0/0 cert
    - hostssl all streaming_replica all cert map=cnpg_streaming_replica
  pg_ident:
    - local postgres admin
    - local postgres myuser
```
The client certificate for the owner (app-client-cert secret) will automatically use the correct commonName for the new owner.
### Customizing the admin role
To change the admin role name, set pgEdge.adminUser and update the managed role definition to match:
```yaml
pgEdge:
  adminUser: dbadmin
  clusterSpec:
    managed:
      roles:
        - name: dbadmin
          ensure: present
          comment: Admin role
          login: true
          superuser: true
```
The adminUser value must match a role defined in managed.roles with superuser: true. As with the database name, you must also update pg_hba and pg_ident to reference the new admin name.
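Continuing the dbadmin example, the matching pg_hba and pg_ident entries would look like this (assuming the default app database and owner):

```yaml
postgresql:
  pg_hba:
    - hostssl app pgedge 0.0.0.0/0 cert
    - hostssl app dbadmin 0.0.0.0/0 cert
    - hostssl app app 0.0.0.0/0 cert
    - hostssl all streaming_replica all cert map=cnpg_streaming_replica
  pg_ident:
    - local postgres dbadmin
    - local postgres app
```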
### Creating additional users
Additional users can be created using CloudNativePG's managed roles:
```yaml
pgEdge:
  clusterSpec:
    managed:
      roles:
        - name: admin
          ensure: present
          login: true
          superuser: true
        - name: readonly
          ensure: present
          login: true
          superuser: false
```
New roles need corresponding pg_hba entries to connect. Add them to the pg_hba list in your values file.
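For the readonly role above, a matching entry might look like the following sketch. Note that cert authentication requires a client certificate with a commonName of readonly, which this chart does not provision automatically:

```yaml
postgresql:
  pg_hba:
    # ...restate the existing entries from the default values file...
    # Hypothetical rule for the new role; adjust database and auth method as needed.
    - hostssl app readonly 0.0.0.0/0 cert
```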
### Password authentication
By default, CloudNativePG generates a unique password for each managed user on each node, stored in a secret named pgedge-n#-<username>. This means the app user will have a different password on pgedge-n1 than on pgedge-n2.
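To read a generated password, the usual kubectl and base64 pattern applies. A sketch for the app user on node n1, assuming the secret follows the pgedge-n#-&lt;username&gt; naming described above and stores the value under the conventional password key:

```shell
# Fetch and decode the generated password from the per-node secret.
kubectl get secret pgedge-n1-app -o jsonpath='{.data.password}' | base64 -d
```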
If your application needs a consistent password across all nodes, you can pre-create a Kubernetes secret with the desired password and reference it in the role definition using passwordSecret. See the CloudNativePG managed roles documentation for details.
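A sketch of that setup for the admin role, assuming you have pre-created a kubernetes.io/basic-auth secret named shared-admin-password in the release namespace:

```yaml
pgEdge:
  clusterSpec:
    managed:
      roles:
        - name: admin
          ensure: present
          login: true
          superuser: true
          passwordSecret:
            # Pre-created basic-auth secret (name is an assumption for this example).
            name: shared-admin-password
```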
The pg_hba rules in this chart only allow cert authentication for remote connections, so passwords are only usable via local connections (e.g., kubectl cnpg psql). To fully disable password-based login for a managed role, set disablePassword: true in the role definition:
```yaml
pgEdge:
  clusterSpec:
    managed:
      roles:
        - name: admin
          ensure: present
          login: true
          superuser: true
          disablePassword: true
```
## Spock configuration
This chart includes a job that initializes Spock multi-master replication across all nodes once they are all available.
The job runs by default, waiting for every cluster associated with the current deployment to be ready before performing initialization.
To disable this behavior, set pgEdge.initSpock to false.
### snowflake.node and lolor.node
This chart automatically configures snowflake.node and lolor.node based on the name property of each node.
For example, a node named n1 will have the following PostgreSQL configuration applied to ensure snowflake and lolor are configured appropriately:
```yaml
postgresql:
  parameters:
    lolor.node: "1"
    snowflake.node: "1"
```
If you wish to override this behavior, or plan to use an alternate naming scheme for your nodes, you can set the ordinal property for each node:
```yaml
pgEdge:
  appName: pgedge
  nodes:
    - name: a
      hostname: pgedge-a-rw
      ordinal: 1
    - name: b
      hostname: pgedge-b-rw
      ordinal: 2
    - name: c
      hostname: pgedge-c-rw
      ordinal: 3
  clusterSpec:
    storage:
      size: 1Gi
```
## Extensions
This chart supports all extensions included in the standard flavor of the pgEdge Enterprise Postgres Image.
By default, shared_preload_libraries contains pg_stat_statements, snowflake, and spock. For additional extensions, you may need to override postgresql.shared_preload_libraries and set additional parameters in postgresql.parameters in your values.yaml to ensure the extension is loaded and configured properly.
> **Note**
>
> Always include spock in shared_preload_libraries, as it is required for core functionality provided by this chart. This chart will call CREATE EXTENSION for spock when initializing each CloudNativePG Cluster.
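As a sketch, enabling an additional preload extension (pg_cron here; whether it ships in your image flavor is an assumption) while preserving the required defaults might look like this:

```yaml
pgEdge:
  clusterSpec:
    postgresql:
      shared_preload_libraries:
        # Keep the defaults; overriding this list replaces it entirely.
        - pg_stat_statements
        - snowflake
        - spock
        # Hypothetical additional extension.
        - pg_cron
      parameters:
        # Extension-specific setting (pg_cron example).
        cron.database_name: "app"
```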
## Values reference
You can customize this Helm chart by specifying configuration parameters in your values.yaml file.
The following table lists all available options and their descriptions.
| Key | Type | Default | Description |
|---|---|---|---|
| pgEdge.adminUser | string | "admin" | The name of the admin role used for database management and init-spock connections. |
| pgEdge.appName | string | "pgedge" | Determines the name of resources in the pgEdge cluster. Many other values are derived from this name, so it must be no more than 26 characters long. |
| pgEdge.clusterSpec | object | {"bootstrap":{"initdb":{"database":"app","encoding":"UTF8","owner":"app","postInitApplicationSQL":["CREATE EXTENSION spock;"],"postInitSQL":[],"postInitTemplateSQL":[]}},"certificates":{"clientCASecret":"client-ca-key-pair","replicationTLSSecret":"streaming-replica-client-cert"},"imageName":"ghcr.io/pgedge/pgedge-postgres:18-spock5-standard","imagePullPolicy":"Always","instances":1,"managed":{"roles":[{"comment":"Admin role","ensure":"present","login":true,"name":"admin","superuser":true}]},"postgresql":{"parameters":{"checkpoint_completion_target":"0.9","checkpoint_timeout":"15min","dynamic_shared_memory_type":"posix","hot_standby_feedback":"on","spock.allow_ddl_from_functions":"on","spock.conflict_log_level":"DEBUG","spock.conflict_resolution":"last_update_wins","spock.enable_ddl_replication":"on","spock.include_ddl_repset":"on","spock.save_resolutions":"on","track_commit_timestamp":"on","track_io_timing":"on","wal_level":"logical","wal_sender_timeout":"5s"},"pg_hba":["hostssl app pgedge 0.0.0.0/0 cert","hostssl app admin 0.0.0.0/0 cert","hostssl app app 0.0.0.0/0 cert","hostssl all streaming_replica all cert map=cnpg_streaming_replica"],"pg_ident":["local postgres admin","local postgres app"],"shared_preload_libraries":["pg_stat_statements","snowflake","spock"]},"projectedVolumeTemplate":{"sources":[{"secret":{"items":[{"key":"tls.crt","mode":384,"path":"pgedge/certificates/tls.crt"},{"key":"tls.key","mode":384,"path":"pgedge/certificates/tls.key"},{"key":"ca.crt","mode":384,"path":"pgedge/certificates/ca.crt"}],"name":"pgedge-client-cert"}}]}} | Default CloudNativePG Cluster specification applied to all nodes, which can be overridden on a per-node basis using the clusterSpec field in each node definition. |
| pgEdge.externalNodes | list | [] | Configuration for nodes that are part of the pgEdge cluster but managed externally to this Helm chart. This can be leveraged for multi-cluster deployments or to wire up existing CloudNativePG Clusters to a pgEdge cluster. |
| pgEdge.extraResources | list | [] | Array of extra Kubernetes resources to deploy alongside pgEdge (evaluated as templates). Useful for deploying NetworkPolicies, PodMonitors, ConfigMaps, etc. |
| pgEdge.initSpock | bool | true | Whether or not to run the init-spock job to initialize the pgEdge nodes and subscriptions. In multi-cluster deployments, this should only be set to true on the last cluster to be deployed. |
| pgEdge.initSpockImageName | string | "" | Docker image for the init-spock job. If not set, defaults to ghcr.io/pgedge/pgedge-helm-utils:v |
| pgEdge.initSpockJobConfig.containerSecurityContext | object | {"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true} | Container security context for the init-spock job. Set to a Restricted profile by default. Learn more at https://kubernetes.io/docs/concepts/security/pod-security-standards/ |
| pgEdge.initSpockJobConfig.podSecurityContext | object | {"fsGroup":65532,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}} | Pod security context for the init-spock job. Set to a Restricted profile by default. Learn more at https://kubernetes.io/docs/concepts/security/pod-security-standards/ |
| pgEdge.initSpockJobConfig.resetSpock | bool | false | When true, the init-spock job will drop and recreate all Spock state on every node before reconciling. Use this when bootstrapping from a Barman backup that contains stale Spock configuration. Remove after successful initialization. |
| pgEdge.initSpockJobConfig.timeout | int | 7200 | Maximum time (in seconds) for the init-spock job to complete. Increase for large databases where initial sync may take longer. |
| pgEdge.nodes | list | [] | Configuration for each node in the pgEdge cluster. Each node will be deployed as a separate CloudNativePG Cluster. |
| pgEdge.provisionCerts | bool | true | Whether to deploy cert-manager to manage TLS certificates for the cluster. If false, you must provide your own TLS certificates by creating the secrets defined in clusterSpec.certificates.clientCASecret and clusterSpec.certificates.replicationTLSSecret. |