From d0f42bad340388eb39abca936498167c47d033d9 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ege=20G=C3=BCne=C5=9F?=
Date: Thu, 26 Feb 2026 17:23:49 +0300
Subject: [PATCH] K8SPG-911: Add pg_tde support

This commit adds native pg_tde extension support to the operator.

**This commit only adds Vault KMS support for pg_tde. KMIP support will be
added in future releases.**

When pg_tde is enabled and a Vault configuration is provided, the operator:

- appends pg_tde to shared_preload_libraries,
- mounts the Vault token and CA secrets into database containers,
- runs CREATE EXTENSION in all databases,
- creates the Vault provider by running pg_tde_add_global_key_provider_vault_v2,
- creates a global key by running pg_tde_create_key_using_global_key_provider,
- sets the default key by running pg_tde_set_default_key_using_global_key_provider.

-> Example configuration

  pg_tde:
    enabled: true
    vault:
      host: https://vault-service.vault-service.svc:8200
      mountPath: tde
      tokenSecret:
        name: vault-secret
        key: token
      caSecret:
        name: vault-secret
        key: ca.crt

Note that:

- The mount path needs to point to a KV v2 secrets engine.
- caSecret is optional and can be omitted if you want to use HTTP. In my
  testing, however, I couldn't get Vault to work without TLS; it responds
  with HTTP 405 if TLS is disabled in Vault.
- tokenSecret and caSecret can be the same secret or different ones. The
  operator doesn't assume anything about the contents of these secrets,
  since you need to set the secret keys in cr.yaml yourself.
- Using a non-root token requires more configuration; check the pg_tde docs
  for that. But don't forget to add these to the Vault policy:

```
path "sys/internal/ui/mounts/*" {
  capabilities = ["read"]
}

path "sys/mounts/*" {
  capabilities = ["read"]
}
```

-> API changes

pg_tde requires more configuration options than the other extensions the
operator supports, which required some changes to the extensions API.
With these changes, the 'spec.extensions.builtin' section is deprecated and
all builtin extensions are moved directly under 'spec.extensions' (e.g.
'spec.extensions.pg_stat_monitor'). For now, extensions can be
enabled/disabled with both the old and the new method. If the two methods
are used at the same time, 'spec.extensions.builtin' takes precedence.

-> Status changes

A hash is calculated from the pg_tde configuration provided by the user.
The operator uses this hash to detect whether the configuration has changed
and pg_tde should be reconfigured. The hash can be found in the
status.pgTDERevision field of the **PostgresCluster** object and is removed
when pg_tde is disabled.

The operator also communicates the status of pg_tde through conditions. The
condition with type=PGTDEEnabled can be found in both PerconaPGCluster and
PostgresCluster statuses.

-> Disabling pg_tde

Disabling pg_tde is more involved than disabling other extensions:

- First of all, any encrypted objects must be dropped before disabling;
  otherwise DROP EXTENSION fails with a descriptive error message. **The
  operator won't drop anything; the user needs to do this manually.**
- The extension needs to be disabled in two steps:
  1. First set pg_tde.enabled=false without removing the vault section. The
     operator will drop the extension and restart the pods.
  2. Then you can remove pg_tde.vault. Database pods will be restarted
     again to remove the secret mounts from the containers.
- It's recommended to run CHECKPOINT before removing pg_tde.vault. Even
  though the extension is dropped, Postgres might still try to use
  encrypted objects during recovery after a restart, and so might try to
  access the token secret. Running CHECKPOINT prevents this failure case.

-> Deleting and recreating clusters

If a cluster with pg_tde enabled is deleted but its PVCs are retained, on
recreation you'll see some pg_tde errors in the operator logs. They happen
because the Vault provider and/or the global key already exist. The
operator handles these errors gracefully and configures pg_tde.
The same applies when pg_tde is disabled and re-enabled: since both the
Vault provider and the global key already exist, the operator handles the
"already exists" errors and configures pg_tde.

The global key name is derived from the cluster's .metadata.uid, for
example 'global-master-key-ad19534a-d778-460e-ac87-ca38ef5e6755'. This
means the key changes if the cluster is deleted and recreated. As long as
both the old key and the new key are accessible to pg_tde, this won't cause
any issues; pg_tde handles it the same way it handles key rotation.

-> Validations

- You can't set pg_tde.enabled=true without setting pg_tde.vault.
- If you already had pg_tde.enabled, you can't remove the pg_tde section
  completely.
- If you already had pg_tde.enabled, you can't remove the pg_tde.vault
  section completely.

---------

K8SPG-911: pg_tde improvements/fixes

- add pg version validation
- explicitly disable wal encryption
- enable pg_tde in restore job
- [e2e] read from all pods after restore
- use pg_tde binaries in patroni
- fix vault provider change

All items except the last are straightforward; fixing the Vault provider
change required a lot of changes.

The problem with changing the Vault token in pg_tde is that pg_tde requires
both the new and the old token at the same time to perform the change. This
is not trivial to achieve on K8s, since the operator needs to mount the new
secret to the pods while somehow keeping the old secret mounted.

To achieve this, the operator performs the provider change in two phases:

1. In the first phase, the operator keeps the old secret mounted in the pod
   and prevents a restart. It fetches the new secret contents, stores them
   in temporary files in the `/pgdata` directory, and then runs
   pg_tde_change_global_key_provider_vault_v2.
2. In the second phase, the operator mounts the new secret and restarts the
   pods, then runs pg_tde_change_global_key_provider_vault_v2 with the
   standard credential paths. At the end of this phase, the temporary files
   are cleaned up.
--- ...ator.crunchydata.com_postgresclusters.yaml | 56 ++ .../pgv2.percona.com_perconapgclusters.yaml | 82 +++ .../pgv2.percona.com_perconapgclusters.yaml | 82 +++ ...ator.crunchydata.com_postgresclusters.yaml | 56 ++ deploy/bundle.yaml | 138 ++++ deploy/cr.yaml | 27 +- deploy/crd.yaml | 138 ++++ deploy/cw-bundle.yaml | 138 ++++ e2e-tests/functions | 202 +++++- e2e-tests/run-pr.csv | 1 + e2e-tests/run-release.csv | 1 + .../00-deploy-operator.yaml | 2 +- .../03-install-all-ext.yaml | 16 +- .../06-uninstall-all-ext.yaml | 16 +- .../custom-extensions/00-deploy-operator.yaml | 2 +- e2e-tests/tests/pg-tde/00-assert.yaml | 24 + .../tests/pg-tde/00-deploy-operator.yaml | 13 + e2e-tests/tests/pg-tde/01-assert.yaml | 8 + e2e-tests/tests/pg-tde/01-deploy-vault.yaml | 11 + e2e-tests/tests/pg-tde/02-assert.yaml | 126 ++++ e2e-tests/tests/pg-tde/02-create-cluster.yaml | 19 + e2e-tests/tests/pg-tde/03-write-data.yaml | 17 + e2e-tests/tests/pg-tde/04-assert.yaml | 17 + .../tests/pg-tde/04-verify-encryption.yaml | 23 + e2e-tests/tests/pg-tde/05-assert.yaml | 31 + e2e-tests/tests/pg-tde/05-create-backup.yaml | 9 + e2e-tests/tests/pg-tde/06-write-data.yaml | 15 + e2e-tests/tests/pg-tde/07-assert.yaml | 30 + e2e-tests/tests/pg-tde/07-create-restore.yaml | 7 + e2e-tests/tests/pg-tde/08-assert.yaml | 30 + e2e-tests/tests/pg-tde/08-read-data.yaml | 22 + e2e-tests/tests/pg-tde/09-assert.yaml | 110 ++++ .../pg-tde/09-change-vault-provider.yaml | 31 + e2e-tests/tests/pg-tde/10-assert.yaml | 12 + .../tests/pg-tde/10-verify-after-change.yaml | 27 + e2e-tests/tests/pg-tde/11-assert.yaml | 103 +++ e2e-tests/tests/pg-tde/11-disable-pgtde.yaml | 27 + e2e-tests/tests/pg-tde/12-assert.yaml | 86 +++ .../tests/pg-tde/12-remove-pgtde-config.yaml | 12 + .../05-sleep-after-operator-update.yaml | 2 +- e2e-tests/vars.sh | 1 + .../controller/postgrescluster/controller.go | 15 +- .../controller/postgrescluster/instance.go | 71 ++ .../controller/postgrescluster/pgbackrest.go | 7 +- 
.../controller/postgrescluster/postgres.go | 201 +++++- internal/naming/annotations.go | 3 + internal/naming/names.go | 15 + internal/patroni/config.go | 8 + internal/patroni/config_test.go | 43 ++ internal/pgbackrest/config.go | 6 +- internal/pgbackrest/config_test.go | 6 +- internal/pgtde/postgres.go | 281 ++++++++ internal/pgtde/postgres_test.go | 604 ++++++++++++++++++ internal/pgvector/postgres.go | 8 +- internal/postgres/reconcile.go | 61 ++ percona/controller/pgbackup/controller.go | 3 + .../controller/pgcluster/controller_test.go | 164 +++++ .../v2/perconapgcluster_types.go | 98 ++- .../v2/zz_generated.deepcopy.go | 26 + .../v1beta1/postgrescluster_test.go | 6 +- .../v1beta1/postgrescluster_types.go | 33 + .../v1beta1/zz_generated.deepcopy.go | 55 +- 62 files changed, 3421 insertions(+), 63 deletions(-) create mode 100644 e2e-tests/tests/pg-tde/00-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/00-deploy-operator.yaml create mode 100644 e2e-tests/tests/pg-tde/01-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/01-deploy-vault.yaml create mode 100644 e2e-tests/tests/pg-tde/02-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/02-create-cluster.yaml create mode 100644 e2e-tests/tests/pg-tde/03-write-data.yaml create mode 100644 e2e-tests/tests/pg-tde/04-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/04-verify-encryption.yaml create mode 100644 e2e-tests/tests/pg-tde/05-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/05-create-backup.yaml create mode 100644 e2e-tests/tests/pg-tde/06-write-data.yaml create mode 100644 e2e-tests/tests/pg-tde/07-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/07-create-restore.yaml create mode 100644 e2e-tests/tests/pg-tde/08-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/08-read-data.yaml create mode 100644 e2e-tests/tests/pg-tde/09-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/09-change-vault-provider.yaml create mode 100644 e2e-tests/tests/pg-tde/10-assert.yaml create mode 100644 
e2e-tests/tests/pg-tde/10-verify-after-change.yaml create mode 100644 e2e-tests/tests/pg-tde/11-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/11-disable-pgtde.yaml create mode 100644 e2e-tests/tests/pg-tde/12-assert.yaml create mode 100644 e2e-tests/tests/pg-tde/12-remove-pgtde-config.yaml create mode 100644 internal/pgtde/postgres.go create mode 100644 internal/pgtde/postgres_test.go diff --git a/build/crd/crunchy/generated/postgres-operator.crunchydata.com_postgresclusters.yaml b/build/crd/crunchy/generated/postgres-operator.crunchydata.com_postgresclusters.yaml index d46bd28fdd..b40e8b5986 100644 --- a/build/crd/crunchy/generated/postgres-operator.crunchydata.com_postgresclusters.yaml +++ b/build/crd/crunchy/generated/postgres-operator.crunchydata.com_postgresclusters.yaml @@ -13594,6 +13594,53 @@ spec: type: boolean extensions: properties: + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. 
+ properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' pgAudit: type: boolean pgRepack: @@ -13605,6 +13652,11 @@ spec: pgvector: type: boolean type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: |- The image name to use for PostgreSQL containers. When omitted, the value @@ -30971,6 +31023,10 @@ spec: description: The PostgreSQL system identifier reported by Patroni. type: string type: object + pgTDERevision: + description: Identifies the pg_tde configuration that have been installed + into PostgreSQL. + type: string pgbackrest: description: Status information for pgBackRest properties: diff --git a/build/crd/percona/generated/pgv2.percona.com_perconapgclusters.yaml b/build/crd/percona/generated/pgv2.percona.com_perconapgclusters.yaml index 862fc4da16..983ff184dd 100644 --- a/build/crd/percona/generated/pgv2.percona.com_perconapgclusters.yaml +++ b/build/crd/percona/generated/pgv2.percona.com_perconapgclusters.yaml @@ -13693,6 +13693,8 @@ spec: description: The specification of extensions. properties: builtin: + description: 'Deprecated: Use extensions. instead. + This field will be removed after 2.11.0.' 
properties: pg_audit: type: boolean @@ -13722,6 +13724,78 @@ spec: description: PullPolicy describes a policy for if/when to pull a container image type: string + pg_audit: + properties: + enabled: + type: boolean + type: object + pg_repack: + properties: + enabled: + type: boolean + type: object + pg_stat_monitor: + properties: + enabled: + type: boolean + type: object + pg_stat_statements: + properties: + enabled: + type: boolean + type: object + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' + pgvector: + properties: + enabled: + type: boolean + type: object storage: properties: bucket: @@ -13804,6 +13878,11 @@ spec: type: string type: object type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: The image name to use for PostgreSQL containers. 
type: string @@ -28778,6 +28857,9 @@ spec: - postgresVersion type: object x-kubernetes-validations: + - message: pg_tde is only supported for PG17 and above + rule: '!has(self.extensions) || !has(self.extensions.pg_tde) || !has(self.extensions.pg_tde.enabled) + || !self.extensions.pg_tde.enabled || self.postgresVersion >= 17' - message: PostgresVersion must be >= 15 if grantPublicSchemaAccess exists and is true rule: '!has(self.users) || self.postgresVersion >= 15 || self.users.all(u, diff --git a/config/crd/bases/pgv2.percona.com_perconapgclusters.yaml b/config/crd/bases/pgv2.percona.com_perconapgclusters.yaml index 04569d5887..992057d464 100644 --- a/config/crd/bases/pgv2.percona.com_perconapgclusters.yaml +++ b/config/crd/bases/pgv2.percona.com_perconapgclusters.yaml @@ -14332,6 +14332,8 @@ spec: description: The specification of extensions. properties: builtin: + description: 'Deprecated: Use extensions. instead. + This field will be removed after 2.11.0.' properties: pg_audit: type: boolean @@ -14361,6 +14363,78 @@ spec: description: PullPolicy describes a policy for if/when to pull a container image type: string + pg_audit: + properties: + enabled: + type: boolean + type: object + pg_repack: + properties: + enabled: + type: boolean + type: object + pg_stat_monitor: + properties: + enabled: + type: boolean + type: object + pg_stat_statements: + properties: + enabled: + type: boolean + type: object + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. 
+ type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' + pgvector: + properties: + enabled: + type: boolean + type: object storage: properties: bucket: @@ -14443,6 +14517,11 @@ spec: type: string type: object type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: The image name to use for PostgreSQL containers. 
type: string @@ -29417,6 +29496,9 @@ spec: - postgresVersion type: object x-kubernetes-validations: + - message: pg_tde is only supported for PG17 and above + rule: '!has(self.extensions) || !has(self.extensions.pg_tde) || !has(self.extensions.pg_tde.enabled) + || !self.extensions.pg_tde.enabled || self.postgresVersion >= 17' - message: PostgresVersion must be >= 15 if grantPublicSchemaAccess exists and is true rule: '!has(self.users) || self.postgresVersion >= 15 || self.users.all(u, diff --git a/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml b/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml index 20fd0d3f9f..2e010a371a 100644 --- a/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml +++ b/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml @@ -13554,6 +13554,53 @@ spec: type: boolean extensions: properties: + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. 
+ properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' pgAudit: type: boolean pgRepack: @@ -13565,6 +13612,11 @@ spec: pgvector: type: boolean type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: |- The image name to use for PostgreSQL containers. When omitted, the value @@ -30869,6 +30921,10 @@ spec: description: The PostgreSQL system identifier reported by Patroni. type: string type: object + pgTDERevision: + description: Identifies the pg_tde configuration that have been installed + into PostgreSQL. + type: string pgbackrest: description: Status information for pgBackRest properties: diff --git a/deploy/bundle.yaml b/deploy/bundle.yaml index cb4c025892..6abca1287d 100644 --- a/deploy/bundle.yaml +++ b/deploy/bundle.yaml @@ -14629,6 +14629,8 @@ spec: description: The specification of extensions. properties: builtin: + description: 'Deprecated: Use extensions. instead. + This field will be removed after 2.11.0.' 
properties: pg_audit: type: boolean @@ -14658,6 +14660,78 @@ spec: description: PullPolicy describes a policy for if/when to pull a container image type: string + pg_audit: + properties: + enabled: + type: boolean + type: object + pg_repack: + properties: + enabled: + type: boolean + type: object + pg_stat_monitor: + properties: + enabled: + type: boolean + type: object + pg_stat_statements: + properties: + enabled: + type: boolean + type: object + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' + pgvector: + properties: + enabled: + type: boolean + type: object storage: properties: bucket: @@ -14740,6 +14814,11 @@ spec: type: string type: object type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: The image name to use for PostgreSQL containers. 
type: string @@ -29714,6 +29793,9 @@ spec: - postgresVersion type: object x-kubernetes-validations: + - message: pg_tde is only supported for PG17 and above + rule: '!has(self.extensions) || !has(self.extensions.pg_tde) || !has(self.extensions.pg_tde.enabled) + || !self.extensions.pg_tde.enabled || self.postgresVersion >= 17' - message: PostgresVersion must be >= 15 if grantPublicSchemaAccess exists and is true rule: '!has(self.users) || self.postgresVersion >= 15 || self.users.all(u, @@ -51568,6 +51650,53 @@ spec: type: boolean extensions: properties: + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' pgAudit: type: boolean pgRepack: @@ -51579,6 +51708,11 @@ spec: pgvector: type: boolean type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: |- The image name to use for PostgreSQL containers. 
When omitted, the value @@ -68883,6 +69017,10 @@ spec: description: The PostgreSQL system identifier reported by Patroni. type: string type: object + pgTDERevision: + description: Identifies the pg_tde configuration that have been installed + into PostgreSQL. + type: string pgbackrest: description: Status information for pgBackRest properties: diff --git a/deploy/cr.yaml b/deploy/cr.yaml index bc3668c299..00557408d6 100644 --- a/deploy/cr.yaml +++ b/deploy/cr.yaml @@ -765,12 +765,27 @@ spec: # disableSSL: false # secret: # name: cluster1-extensions-secret -# builtin: -# pg_stat_monitor: true -# pg_stat_statements: false -# pg_audit: true -# pgvector: false -# pg_repack: false +# pg_stat_monitor: +# enabled: true +# pg_stat_statements: +# enabled: false +# pg_audit: +# enabled: true +# pgvector: +# enabled: false +# pg_repack: +# enabled: false +# pg_tde: +# enabled: false +# vault: +# host: https://vault-service:8200 +# mountPath: tde +# tokenSecret: +# name: pg-tde-vault-secret +# key: token +# caSecret: +# name: pg-tde-vault-secret +# key: ca.crt # custom: # - name: pg_cron # version: 1.6.1 diff --git a/deploy/crd.yaml b/deploy/crd.yaml index 42aa37435c..bddf6ce6ab 100644 --- a/deploy/crd.yaml +++ b/deploy/crd.yaml @@ -14629,6 +14629,8 @@ spec: description: The specification of extensions. properties: builtin: + description: 'Deprecated: Use extensions. instead. + This field will be removed after 2.11.0.' 
properties: pg_audit: type: boolean @@ -14658,6 +14660,78 @@ spec: description: PullPolicy describes a policy for if/when to pull a container image type: string + pg_audit: + properties: + enabled: + type: boolean + type: object + pg_repack: + properties: + enabled: + type: boolean + type: object + pg_stat_monitor: + properties: + enabled: + type: boolean + type: object + pg_stat_statements: + properties: + enabled: + type: boolean + type: object + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' + pgvector: + properties: + enabled: + type: boolean + type: object storage: properties: bucket: @@ -14740,6 +14814,11 @@ spec: type: string type: object type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: The image name to use for PostgreSQL containers. 
type: string @@ -29714,6 +29793,9 @@ spec: - postgresVersion type: object x-kubernetes-validations: + - message: pg_tde is only supported for PG17 and above + rule: '!has(self.extensions) || !has(self.extensions.pg_tde) || !has(self.extensions.pg_tde.enabled) + || !self.extensions.pg_tde.enabled || self.postgresVersion >= 17' - message: PostgresVersion must be >= 15 if grantPublicSchemaAccess exists and is true rule: '!has(self.users) || self.postgresVersion >= 15 || self.users.all(u, @@ -51568,6 +51650,53 @@ spec: type: boolean extensions: properties: + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' pgAudit: type: boolean pgRepack: @@ -51579,6 +51708,11 @@ spec: pgvector: type: boolean type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: |- The image name to use for PostgreSQL containers. 
When omitted, the value @@ -68883,6 +69017,10 @@ spec: description: The PostgreSQL system identifier reported by Patroni. type: string type: object + pgTDERevision: + description: Identifies the pg_tde configuration that have been installed + into PostgreSQL. + type: string pgbackrest: description: Status information for pgBackRest properties: diff --git a/deploy/cw-bundle.yaml b/deploy/cw-bundle.yaml index 2330707b4a..d9fb7f02c3 100644 --- a/deploy/cw-bundle.yaml +++ b/deploy/cw-bundle.yaml @@ -14629,6 +14629,8 @@ spec: description: The specification of extensions. properties: builtin: + description: 'Deprecated: Use extensions. instead. + This field will be removed after 2.11.0.' properties: pg_audit: type: boolean @@ -14658,6 +14660,78 @@ spec: description: PullPolicy describes a policy for if/when to pull a container image type: string + pg_audit: + properties: + enabled: + type: boolean + type: object + pg_repack: + properties: + enabled: + type: boolean + type: object + pg_stat_monitor: + properties: + enabled: + type: boolean + type: object + pg_stat_statements: + properties: + enabled: + type: boolean + type: object + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. + type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. 
+ properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' + pgvector: + properties: + enabled: + type: boolean + type: object storage: properties: bucket: @@ -14740,6 +14814,11 @@ spec: type: string type: object type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: The image name to use for PostgreSQL containers. type: string @@ -29714,6 +29793,9 @@ spec: - postgresVersion type: object x-kubernetes-validations: + - message: pg_tde is only supported for PG17 and above + rule: '!has(self.extensions) || !has(self.extensions.pg_tde) || !has(self.extensions.pg_tde.enabled) + || !self.extensions.pg_tde.enabled || self.postgresVersion >= 17' - message: PostgresVersion must be >= 15 if grantPublicSchemaAccess exists and is true rule: '!has(self.users) || self.postgresVersion >= 15 || self.users.all(u, @@ -51568,6 +51650,53 @@ spec: type: boolean extensions: properties: + pg_tde: + properties: + enabled: + type: boolean + vault: + properties: + caSecret: + description: Name of the secret that contains the CA certificate + for SSL verification. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + host: + description: Host of Vault server. + type: string + mountPath: + default: secret/data + description: The mount point on the Vault server where + the key provider should store the keys. 
+ type: string + tokenSecret: + description: Name of the secret that contains the access + token with read and write access to the mount path. + properties: + key: + type: string + name: + type: string + required: + - key + - name + type: object + required: + - host + - tokenSecret + type: object + type: object + x-kubernetes-validations: + - message: vault is required for enabling pg_tde + rule: '!has(self.enabled) || (has(self.enabled) && self.enabled + == false) || has(self.vault)' pgAudit: type: boolean pgRepack: @@ -51579,6 +51708,11 @@ spec: pgvector: type: boolean type: object + x-kubernetes-validations: + - message: to disable pg_tde first set enabled=false without removing + vault and wait for pod restarts + rule: '!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) + || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)' image: description: |- The image name to use for PostgreSQL containers. When omitted, the value @@ -68883,6 +69017,10 @@ spec: description: The PostgreSQL system identifier reported by Patroni. type: string type: object + pgTDERevision: + description: Identifies the pg_tde configuration that has been installed + into PostgreSQL.
+ type: string pgbackrest: description: Status information for pgBackRest properties: diff --git a/e2e-tests/functions b/e2e-tests/functions index ca19d5d0ce..5e0c6daf46 100644 --- a/e2e-tests/functions +++ b/e2e-tests/functions @@ -403,12 +403,29 @@ run_psql() { bash -c "printf '$command\n' | PGPASSWORD="\'$password\'" psql -v ON_ERROR_STOP=1 -t -q $uri" } +run_psql_command() { + local command=${1} + local uri=${2} + local driver=${3:-postgres} + + kubectl -n ${NAMESPACE} exec $(get_client_pod) -- \ + psql -v ON_ERROR_STOP=1 -t -q "${driver}://${uri}" -c "${command}" +} + get_psql_user_pass() { local secret_name=${1} kubectl -n ${NAMESPACE} get "secret/${secret_name}" --template='{{.data.password | base64decode}}' } +get_psql_uri() { + local cluster=$1 + local user=$2 + local secret_name="${cluster}-pguser-${user}" + + echo "${user}:$(get_psql_user_pass ${secret_name})@$(get_psql_user_host ${secret_name})" +} + get_pgbouncer_host() { local secret_name=${1} @@ -1692,4 +1709,187 @@ verify_hugepages_usage() { echo "Hugepages available but NOT being used by PostgreSQL" return 1 fi -} \ No newline at end of file +} + +function vault_tls() { + local name=${1:-vault-service} + local tmp_dir=$2 + + local service=$name + local namespace=$name + local secret_name=$name + local csr_name=vault-csr-${RANDOM} + local csr_api_ver="v1" + local csr_signer + local platform=$(detect_k8s_platform) + + echo "Detected platform: ${platform}" + + case ${platform} in + eks) + csr_signer=" signerName: beta.eks.amazonaws.com/app-serving" + ;; + *) + csr_signer=" signerName: kubernetes.io/kubelet-serving" + ;; + esac + + openssl genrsa -out ${tmp_dir}/vault.key 2048 + cat <<EOF >${tmp_dir}/csr.conf +[req] +req_extensions = v3_req +distinguished_name = req_distinguished_name +[req_distinguished_name] +[ v3_req ] +basicConstraints = CA:FALSE +keyUsage = nonRepudiation, digitalSignature, keyEncipherment +extendedKeyUsage = serverAuth +subjectAltName = @alt_names +[alt_names] +DNS.1 = ${service}
+DNS.2 = ${service}.${namespace} +DNS.3 = ${service}.${namespace}.svc +DNS.4 = ${service}.${namespace}.svc.cluster.local +IP.1 = 127.0.0.1 +EOF + + openssl req -new -key ${tmp_dir}/vault.key -subj "/CN=system:node:${service}.${namespace}.svc;/O=system:nodes" -out ${tmp_dir}/server.csr -config ${tmp_dir}/csr.conf + + cat <<EOF >${tmp_dir}/csr.yaml +apiVersion: certificates.k8s.io/${csr_api_ver} +kind: CertificateSigningRequest +metadata: + name: ${csr_name} +spec: + groups: + - system:authenticated + request: $(cat ${tmp_dir}/server.csr | base64 | tr -d '\n') +${csr_signer} + usages: + - digital signature + - key encipherment + - server auth +EOF + + kubectl create -f ${tmp_dir}/csr.yaml + sleep 10 + kubectl certificate approve ${csr_name} + kubectl get csr ${csr_name} -o jsonpath='{.status.certificate}' >${tmp_dir}/serverCert + openssl base64 -in ${tmp_dir}/serverCert -d -A -out ${tmp_dir}/vault.crt + kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d >${tmp_dir}/vault.ca + if [[ -n ${OPENSHIFT} ]]; then + if [[ "x$(kubectl get namespaces | awk '{print $1}' | grep openshift-kube-controller-manager-operator)" != "x" ]]; then + #Detecting openshift 4+ + kubectl -n openshift-kube-controller-manager-operator get secret csr-signer -o jsonpath='{.data.tls\.crt}' \ + | base64 -d >${tmp_dir}/vault.ca + else + local ca_secret_name=$(kubectl -n default get secrets \ + | grep default \ + | grep service-account-token \ + | head -n 1 \ + | awk {'print $1'}) + kubectl -n default get secret ${ca_secret_name} -o jsonpath='{.data.ca\.crt}' \ + | base64 -d >${tmp_dir}/vault.ca + fi + fi + kubectl create secret generic ${secret_name} \ + --namespace ${namespace} \ + --from-file=vault.key=${tmp_dir}/vault.key \ + --from-file=vault.crt=${tmp_dir}/vault.crt \ + --from-file=vault.ca=${tmp_dir}/vault.ca +} + +function start_vault() { + local name=${1:-vault-service} + local protocol=${2:-http} + local platform=kubernetes +
local tmp_dir=$(mktemp -d) + + if [[ -n ${OPENSHIFT} ]]; then + platform=openshift + oc patch clusterrole system:auth-delegator --type='json' -p '[{"op":"add","path":"/rules/-", "value":{"apiGroups":["security.openshift.io"], "attributeRestrictions":null, "resourceNames": ["privileged"], "resources":["securitycontextconstraints"],"verbs":["use"]}}]' + local extra_args="--set server.image.repository=docker.io/hashicorp/vault --set injector.image.repository=docker.io/hashicorp/vault-k8s" + fi + + create_namespace "$name" "skip_clean" + helm repo add hashicorp https://helm.releases.hashicorp.com + helm uninstall "$name" || : + + echo "install Vault $name" + + if [ $protocol == "https" ]; then + vault_tls "${name}" ${tmp_dir} + helm install $name hashicorp/vault \ + --disable-openapi-validation \ + --version $VAULT_VER \ + --namespace "$name" \ + --set global.tlsDisable=false \ + --set global.platform="${platform}" \ + --set server.dataStorage.enabled=false \ + --set server.standalone.enabled=true \ + --set server.ha.raft.enabled=false \ + --set server.extraVolumes[0].type=secret \ + --set server.extraVolumes[0].name=$name \ + --set server.extraEnvironmentVars.VAULT_CACERT=/vault/userconfig/$name/vault.ca \ + $extra_args \ + --set server.standalone.config=" \ +listener \"tcp\" { + address = \"[::]:8200\" + cluster_address = \"[::]:8201\" + tls_cert_file = \"/vault/userconfig/$name/vault.crt\" + tls_key_file = \"/vault/userconfig/$name/vault.key\" + tls_client_ca_file = \"/vault/userconfig/$name/vault.ca\" +} + +storage \"file\" { + path = \"/vault/data\" +}" + + else + helm install $name hashicorp/vault \ + --disable-openapi-validation \ + --version $VAULT_VER \ + --namespace "$name" \ + --set server.dataStorage.enabled=false \ + --set server.standalone.enabled=true \ + --set server.ha.raft.enabled=false \ + $extra_args \ + --set global.platform="${platform}" + fi + + if [[ -n ${OPENSHIFT} ]]; then + oc patch clusterrole $name-agent-injector-clusterrole --type='json' 
-p '[{"op":"add","path":"/rules/-", "value":{"apiGroups":["security.openshift.io"], "attributeRestrictions":null, "resourceNames": ["privileged"], "resources":["securitycontextconstraints"],"verbs":["use"]}}]' + oc adm policy add-scc-to-user privileged $name-agent-injector + fi + + set +o xtrace + local retry=0 + echo -n pod/$name-0 + until kubectl -n ${name} get pod/$name-0 -o 'jsonpath={.status.containerStatuses[0].state}' 2>/dev/null | grep 'running'; do + echo -n . + sleep 1 + let retry+=1 + if [ "$retry" -ge 480 ]; then + kubectl -n ${name} describe pod/$name-0 + kubectl -n ${name} logs $name-0 + echo max retry count "$retry" reached. something went wrong with vault + exit 1 + fi + done + + kubectl -n ${name} exec -it $name-0 -- vault operator init -tls-skip-verify -key-shares=1 -key-threshold=1 -format=json >"$tmp_dir/$name" + local unsealKey=$(jq -r ".unseal_keys_b64[]" <"$tmp_dir/$name") + local token=$(jq -r ".root_token" <"$tmp_dir/$name") + sleep 10 + + kubectl -n ${name} exec -it $name-0 -- vault operator unseal -tls-skip-verify "$unsealKey" + kubectl -n ${name} exec -it $name-0 -- \ + sh -c "export VAULT_TOKEN=$token && export VAULT_LOG_LEVEL=trace \ + && vault secrets enable --version=2 -path=tde kv \ + && vault audit enable file file_path=/vault/vault-audit.log" + sleep 10 + + kubectl -n "${NAMESPACE}" create secret generic vault-secret \ + --from-literal=token=${token} \ + --from-file=ca.crt=${tmp_dir}/vault.ca +} diff --git a/e2e-tests/run-pr.csv b/e2e-tests/run-pr.csv index cf6e14ac09..03ea293969 100644 --- a/e2e-tests/run-pr.csv +++ b/e2e-tests/run-pr.csv @@ -17,6 +17,7 @@ monitoring monitoring-pmm3 one-pod operator-self-healing +pg-tde pitr scaling scheduled-backup diff --git a/e2e-tests/run-release.csv b/e2e-tests/run-release.csv index 422c7b93ec..a709c3728b 100644 --- a/e2e-tests/run-release.csv +++ b/e2e-tests/run-release.csv @@ -18,6 +18,7 @@ monitoring monitoring-pmm3 one-pod operator-self-healing +pg-tde pitr scaling scheduled-backup diff 
--git a/e2e-tests/tests/builtin-extensions/00-deploy-operator.yaml b/e2e-tests/tests/builtin-extensions/00-deploy-operator.yaml index 96329aabb8..ae4a2419aa 100644 --- a/e2e-tests/tests/builtin-extensions/00-deploy-operator.yaml +++ b/e2e-tests/tests/builtin-extensions/00-deploy-operator.yaml @@ -1,6 +1,5 @@ apiVersion: kuttl.dev/v1beta1 kind: TestStep -timeout: 10 commands: - script: |- set -o errexit @@ -13,3 +12,4 @@ commands: deploy_client deploy_s3_secrets deploy_minio + timeout: 120 diff --git a/e2e-tests/tests/builtin-extensions/03-install-all-ext.yaml b/e2e-tests/tests/builtin-extensions/03-install-all-ext.yaml index 4446a35e36..2ff90b064a 100644 --- a/e2e-tests/tests/builtin-extensions/03-install-all-ext.yaml +++ b/e2e-tests/tests/builtin-extensions/03-install-all-ext.yaml @@ -11,9 +11,13 @@ spec: pgaudit.log_level: 'warning' logging_collector: 'off' extensions: - builtin: - pg_stat_monitor: true - pg_stat_statements: true - pg_audit: true - pgvector: true - pg_repack: true + pg_stat_monitor: + enabled: true + pg_stat_statements: + enabled: true + pg_audit: + enabled: true + pgvector: + enabled: true + pg_repack: + enabled: true diff --git a/e2e-tests/tests/builtin-extensions/06-uninstall-all-ext.yaml b/e2e-tests/tests/builtin-extensions/06-uninstall-all-ext.yaml index 8321692d31..df91b78564 100644 --- a/e2e-tests/tests/builtin-extensions/06-uninstall-all-ext.yaml +++ b/e2e-tests/tests/builtin-extensions/06-uninstall-all-ext.yaml @@ -4,9 +4,13 @@ metadata: name: builtin-extensions spec: extensions: - builtin: - pg_stat_monitor: false - pg_stat_statements: false - pg_audit: false - pgvector: false - pg_repack: false + pg_stat_monitor: + enabled: false + pg_stat_statements: + enabled: false + pg_audit: + enabled: false + pgvector: + enabled: false + pg_repack: + enabled: false diff --git a/e2e-tests/tests/custom-extensions/00-deploy-operator.yaml b/e2e-tests/tests/custom-extensions/00-deploy-operator.yaml index 0cfe9bbd0e..38092cab34 100644 --- 
a/e2e-tests/tests/custom-extensions/00-deploy-operator.yaml +++ b/e2e-tests/tests/custom-extensions/00-deploy-operator.yaml @@ -1,6 +1,5 @@ apiVersion: kuttl.dev/v1beta1 kind: TestStep -timeout: 10 commands: - script: |- set -o errexit @@ -14,3 +13,4 @@ commands: deploy_s3_secrets deploy_minio copy_custom_extensions_form_aws + timeout: 120 diff --git a/e2e-tests/tests/pg-tde/00-assert.yaml b/e2e-tests/tests/pg-tde/00-assert.yaml new file mode 100644 index 0000000000..ae5a062d84 --- /dev/null +++ b/e2e-tests/tests/pg-tde/00-assert.yaml @@ -0,0 +1,24 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 120 +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: perconapgclusters.pgv2.percona.com +spec: + group: pgv2.percona.com + names: + kind: PerconaPGCluster + listKind: PerconaPGClusterList + plural: perconapgclusters + singular: perconapgcluster + scope: Namespaced +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +metadata: + name: check-operator-deploy-status +timeout: 120 +commands: + - script: kubectl assert exist-enhanced deployment percona-postgresql-operator -n ${OPERATOR_NS:-$NAMESPACE} --field-selector status.readyReplicas=1 diff --git a/e2e-tests/tests/pg-tde/00-deploy-operator.yaml b/e2e-tests/tests/pg-tde/00-deploy-operator.yaml new file mode 100644 index 0000000000..1aaca58be2 --- /dev/null +++ b/e2e-tests/tests/pg-tde/00-deploy-operator.yaml @@ -0,0 +1,13 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 10 +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + init_temp_dir # do this only in the first TestStep + + deploy_operator + deploy_client diff --git a/e2e-tests/tests/pg-tde/01-assert.yaml b/e2e-tests/tests/pg-tde/01-assert.yaml new file mode 100644 index 0000000000..432369127d --- /dev/null +++ b/e2e-tests/tests/pg-tde/01-assert.yaml @@ -0,0 +1,8 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 300 +--- +apiVersion: v1 +kind: Secret 
+metadata: + name: vault-secret diff --git a/e2e-tests/tests/pg-tde/01-deploy-vault.yaml b/e2e-tests/tests/pg-tde/01-deploy-vault.yaml new file mode 100644 index 0000000000..d7127630f6 --- /dev/null +++ b/e2e-tests/tests/pg-tde/01-deploy-vault.yaml @@ -0,0 +1,11 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + + start_vault vault-service https + timeout: 600 diff --git a/e2e-tests/tests/pg-tde/02-assert.yaml b/e2e-tests/tests/pg-tde/02-assert.yaml new file mode 100644 index 0000000000..0115388257 --- /dev/null +++ b/e2e-tests/tests/pg-tde/02-assert.yaml @@ -0,0 +1,126 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 300 +--- +kind: StatefulSet +apiVersion: apps/v1 +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pg-tde + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: pg-tde + controller: true + blockOwnerDeletion: true +spec: + template: + metadata: + annotations: + pgv2.percona.com/tde-installed: "true" + spec: + containers: + - name: database + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true + - mountPath: /pgconf/tde + name: pg-tde + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + - mountPath: /etc/patroni + name: patroni-config + readOnly: true + - mountPath: /opt/crunchy + name: crunchy-bin + - mountPath: /tmp + name: tmp + - mountPath: /dev/shm + name: dshm + - name: replication-cert-copy + - name: pgbackrest + - name: pgbackrest-config + volumes: + - name: cert-volume + - name: postgres-data + - name: database-containerinfo + - name: pg-tde + projected: + 
defaultMode: 384 + sources: + - secret: + items: + - key: token + path: token + name: vault-secret + - secret: + items: + - key: ca.crt + path: ca.crt + name: vault-secret + - name: pgbackrest-server + - name: pgbackrest-config + - name: patroni-config + - name: crunchy-bin + - name: tmp + - name: dshm +status: + observedGeneration: 1 + replicas: 1 + readyReplicas: 1 +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGCluster +metadata: + name: pg-tde +status: + state: ready + conditions: + - type: ReadyForBackup + status: "True" + - type: PGBackRestRepoHostReady + status: "True" + - type: PGBackRestReplicaRepoReady + status: "True" + - type: PGBackRestReplicaCreate + status: "True" + - type: ProxyAvailable + status: "True" + - message: pg_tde is enabled in PerconaPGCluster + observedGeneration: 1 + reason: Enabled + status: "True" + type: PGTDEEnabled +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: pg-tde +status: + pgTDERevision: 9f64d9447 +--- +kind: Job +apiVersion: batch/v1 +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pg-tde + postgres-operator.crunchydata.com/pgbackrest: '' + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 + ownerReferences: + - apiVersion: pgv2.percona.com/v2 + kind: PerconaPGBackup + controller: true + blockOwnerDeletion: true +status: + succeeded: 1 diff --git a/e2e-tests/tests/pg-tde/02-create-cluster.yaml b/e2e-tests/tests/pg-tde/02-create-cluster.yaml new file mode 100644 index 0000000000..c9a49ba290 --- /dev/null +++ b/e2e-tests/tests/pg-tde/02-create-cluster.yaml @@ -0,0 +1,19 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 10 +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + + get_cr \ + | yq '.spec.extensions.pg_tde.enabled = true' \ + | yq '.spec.extensions.pg_tde.vault.host = "https://vault-service.vault-service.svc:8200"' \ + | 
yq '.spec.extensions.pg_tde.vault.mountPath = "tde"' \ + | yq '.spec.extensions.pg_tde.vault.tokenSecret.name = "vault-secret"' \ + | yq '.spec.extensions.pg_tde.vault.tokenSecret.key = "token"' \ + | yq '.spec.extensions.pg_tde.vault.caSecret.name = "vault-secret"' \ + | yq '.spec.extensions.pg_tde.vault.caSecret.key = "ca.crt"' \ + | kubectl -n "${NAMESPACE}" apply -f - diff --git a/e2e-tests/tests/pg-tde/03-write-data.yaml b/e2e-tests/tests/pg-tde/03-write-data.yaml new file mode 100644 index 0000000000..36ee115a06 --- /dev/null +++ b/e2e-tests/tests/pg-tde/03-write-data.yaml @@ -0,0 +1,17 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 60 +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + + run_psql_local \ + 'CREATE DATABASE myapp; \c myapp \\\ CREATE TABLE myTable (id int PRIMARY KEY) USING tde_heap;' \ + "$(get_psql_uri pg-tde postgres)" + + run_psql_local \ + '\c myapp \\\ INSERT INTO myTable (id) VALUES (100500)' \ + "$(get_psql_uri pg-tde postgres)" diff --git a/e2e-tests/tests/pg-tde/04-assert.yaml b/e2e-tests/tests/pg-tde/04-assert.yaml new file mode 100644 index 0000000000..1aa25a7be8 --- /dev/null +++ b/e2e-tests/tests/pg-tde/04-assert.yaml @@ -0,0 +1,17 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 30 +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: 05-verify-extension +data: + pg_tde_extension: ' pg_tde' +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: 05-verify-encryption +data: + pg_tde_is_encrypted: ' t' diff --git a/e2e-tests/tests/pg-tde/04-verify-encryption.yaml b/e2e-tests/tests/pg-tde/04-verify-encryption.yaml new file mode 100644 index 0000000000..b8bf2b47ca --- /dev/null +++ b/e2e-tests/tests/pg-tde/04-verify-encryption.yaml @@ -0,0 +1,23 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 60 +commands: + - script: |- + set -o xtrace + + source ../../functions + + result=$(run_psql_command \ + "SELECT extname FROM pg_extension WHERE extname = 
'pg_tde';" \ + "$(get_psql_uri pg-tde postgres)") + kubectl -n "${NAMESPACE}" create configmap 05-verify-extension --from-literal=pg_tde_extension="$result" + + result=$(run_psql_command \ + "SELECT pg_tde_is_encrypted('myTable');" \ + "$(get_psql_uri pg-tde postgres)/myapp") + kubectl -n "${NAMESPACE}" create configmap 05-verify-encryption --from-literal=pg_tde_is_encrypted="$result" + + # pg_tde_verify_key will throw an error if it fails + run_psql_command \ + "SELECT pg_tde_verify_key();" \ + "$(get_psql_uri pg-tde postgres)/myapp" diff --git a/e2e-tests/tests/pg-tde/05-assert.yaml b/e2e-tests/tests/pg-tde/05-assert.yaml new file mode 100644 index 0000000000..f90d456b16 --- /dev/null +++ b/e2e-tests/tests/pg-tde/05-assert.yaml @@ -0,0 +1,31 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 660 +--- +kind: Job +apiVersion: batch/v1 +metadata: + annotations: + postgres-operator.crunchydata.com/pgbackrest-backup: backup1 + labels: + postgres-operator.crunchydata.com/pgbackrest-backup: manual + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 + ownerReferences: + - apiVersion: pgv2.percona.com/v2 + kind: PerconaPGBackup + controller: true + blockOwnerDeletion: true +status: + succeeded: 1 +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGBackup +metadata: + name: backup1 +spec: + pgCluster: pg-tde + repoName: repo1 + options: + - --type=full +status: + state: Succeeded diff --git a/e2e-tests/tests/pg-tde/05-create-backup.yaml b/e2e-tests/tests/pg-tde/05-create-backup.yaml new file mode 100644 index 0000000000..1789b2a9a1 --- /dev/null +++ b/e2e-tests/tests/pg-tde/05-create-backup.yaml @@ -0,0 +1,9 @@ +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGBackup +metadata: + name: backup1 +spec: + pgCluster: pg-tde + repoName: repo1 + options: + - --type=full diff --git a/e2e-tests/tests/pg-tde/06-write-data.yaml b/e2e-tests/tests/pg-tde/06-write-data.yaml new file mode 100644 index 0000000000..5d31274bf4 --- /dev/null +++ 
b/e2e-tests/tests/pg-tde/06-write-data.yaml @@ -0,0 +1,15 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 60 +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + + run_psql_local \ + '\c myapp \\\ INSERT INTO myTable (id) VALUES (100501)' \ + "$(get_psql_uri pg-tde postgres)" + + sleep 5 diff --git a/e2e-tests/tests/pg-tde/07-assert.yaml b/e2e-tests/tests/pg-tde/07-assert.yaml new file mode 100644 index 0000000000..4f6abe1437 --- /dev/null +++ b/e2e-tests/tests/pg-tde/07-assert.yaml @@ -0,0 +1,30 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 240 +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGCluster +metadata: + name: pg-tde +status: + pgbouncer: + ready: 3 + size: 3 + postgres: + instances: + - name: instance1 + ready: 3 + size: 3 + ready: 3 + size: 3 + state: ready +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGRestore +metadata: + name: restore1 +spec: + pgCluster: pg-tde + repoName: repo1 +status: + state: Succeeded diff --git a/e2e-tests/tests/pg-tde/07-create-restore.yaml b/e2e-tests/tests/pg-tde/07-create-restore.yaml new file mode 100644 index 0000000000..8a666422a4 --- /dev/null +++ b/e2e-tests/tests/pg-tde/07-create-restore.yaml @@ -0,0 +1,7 @@ +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGRestore +metadata: + name: restore1 +spec: + pgCluster: pg-tde + repoName: repo1 diff --git a/e2e-tests/tests/pg-tde/08-assert.yaml b/e2e-tests/tests/pg-tde/08-assert.yaml new file mode 100644 index 0000000000..4d8c494e75 --- /dev/null +++ b/e2e-tests/tests/pg-tde/08-assert.yaml @@ -0,0 +1,30 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 30 +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: 08-read-from-primary +data: + data: |2- + 100500 + 100501 +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: 08-read-from-replica-1 +data: + data: |2- + 100500 + 100501 +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: 08-read-from-replica-2 +data: + data: |2- + 100500 + 
100501 \ No newline at end of file diff --git a/e2e-tests/tests/pg-tde/08-read-data.yaml b/e2e-tests/tests/pg-tde/08-read-data.yaml new file mode 100644 index 0000000000..eae4c8b930 --- /dev/null +++ b/e2e-tests/tests/pg-tde/08-read-data.yaml @@ -0,0 +1,22 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 30 +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + + primary=$(get_pod_by_role pg-tde primary name) + echo "Primary pod: ${primary}" + data=$(kubectl exec ${primary} -n "${NAMESPACE}" -- bash -c 'psql -q -t -d myapp -c "SELECT * from myTable;"') + kubectl create configmap -n "${NAMESPACE}" 08-read-from-primary --from-literal=data="${data}" + + t=1 + for i in $(kubectl get pods -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/cluster=pg-tde,postgres-operator.crunchydata.com/role=replica -o jsonpath='{.items[*].metadata.name}'); do + echo "Replica pod: ${i}" + data=$(kubectl exec ${i} -n "${NAMESPACE}" -- bash -c 'psql -q -t -d myapp -c "SELECT * from myTable;"') + kubectl create configmap -n "${NAMESPACE}" 08-read-from-replica-${t} --from-literal=data="${data}" + t=$((t+1)) + done diff --git a/e2e-tests/tests/pg-tde/09-assert.yaml b/e2e-tests/tests/pg-tde/09-assert.yaml new file mode 100644 index 0000000000..0b0011df31 --- /dev/null +++ b/e2e-tests/tests/pg-tde/09-assert.yaml @@ -0,0 +1,110 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 300 +--- +kind: StatefulSet +apiVersion: apps/v1 +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pg-tde + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: pg-tde + controller: true + blockOwnerDeletion: true +spec: + template: + spec: + containers: + - name: database + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: 
postgres-data + - mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true + - mountPath: /pgconf/tde + name: pg-tde + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + - mountPath: /etc/patroni + name: patroni-config + readOnly: true + - mountPath: /opt/crunchy + name: crunchy-bin + - mountPath: /tmp + name: tmp + - mountPath: /dev/shm + name: dshm + - name: replication-cert-copy + - name: pgbackrest + - name: pgbackrest-config + volumes: + - name: cert-volume + - name: postgres-data + - name: database-containerinfo + - name: pg-tde + projected: + defaultMode: 384 + sources: + - secret: + items: + - key: token + path: token + name: vault-secret-rotated + - secret: + items: + - key: ca.crt + path: ca.crt + name: vault-secret-rotated + - name: pgbackrest-server + - name: pgbackrest-config + - name: patroni-config + - name: crunchy-bin + - name: tmp + - name: dshm +status: + observedGeneration: 3 + replicas: 1 + readyReplicas: 1 +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGCluster +metadata: + name: pg-tde +status: + state: ready + conditions: + - type: ReadyForBackup + status: "True" + - type: PGBackRestRepoHostReady + status: "True" + - type: PGBackRestReplicaRepoReady + status: "True" + - type: PGBackRestReplicaCreate + status: "True" + - type: ProxyAvailable + status: "True" + - message: pg_tde is enabled in PerconaPGCluster + reason: Enabled + status: "True" + type: PGTDEEnabled + - type: PGBackRestoreProgressing + status: "True" + - type: PostgresDataInitialized + status: "True" +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: pg-tde +status: + pgTDERevision: 85f4c65f59 diff --git a/e2e-tests/tests/pg-tde/09-change-vault-provider.yaml b/e2e-tests/tests/pg-tde/09-change-vault-provider.yaml new file mode 100644 index 0000000000..6c48696495 --- /dev/null +++ b/e2e-tests/tests/pg-tde/09-change-vault-provider.yaml @@ -0,0 +1,31 
@@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 120 +commands: + - script: |- + set -o errexit + set -o xtrace + + source ../../functions + + vault_name=vault-service + + old_token=$(kubectl -n "${NAMESPACE}" get secret vault-secret -o jsonpath='{.data.token}' | base64 -d) + new_token=$(kubectl -n ${vault_name} exec ${vault_name}-0 -- \ + sh -c "VAULT_TOKEN=${old_token} vault token create -tls-skip-verify -format=json" | jq -r '.auth.client_token') + + ca_crt=$(kubectl -n "${NAMESPACE}" get secret vault-secret -o jsonpath='{.data.ca\.crt}') + + kubectl -n "${NAMESPACE}" create secret generic vault-secret-rotated \ + --from-literal=token=${new_token} \ + --from-file=ca.crt=<(echo "${ca_crt}" | base64 -d) + + get_cr \ + | yq '.spec.extensions.pg_tde.enabled = true' \ + | yq '.spec.extensions.pg_tde.vault.host = "https://vault-service.vault-service.svc:8200"' \ + | yq '.spec.extensions.pg_tde.vault.mountPath = "tde"' \ + | yq '.spec.extensions.pg_tde.vault.tokenSecret.name = "vault-secret-rotated"' \ + | yq '.spec.extensions.pg_tde.vault.tokenSecret.key = "token"' \ + | yq '.spec.extensions.pg_tde.vault.caSecret.name = "vault-secret-rotated"' \ + | yq '.spec.extensions.pg_tde.vault.caSecret.key = "ca.crt"' \ + | kubectl -n "${NAMESPACE}" apply -f - diff --git a/e2e-tests/tests/pg-tde/10-assert.yaml b/e2e-tests/tests/pg-tde/10-assert.yaml new file mode 100644 index 0000000000..01d608276f --- /dev/null +++ b/e2e-tests/tests/pg-tde/10-assert.yaml @@ -0,0 +1,12 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 30 +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: 10-read-after-change +data: + data: |2- + 100500 + 100501 diff --git a/e2e-tests/tests/pg-tde/10-verify-after-change.yaml b/e2e-tests/tests/pg-tde/10-verify-after-change.yaml new file mode 100644 index 0000000000..16bc38f779 --- /dev/null +++ b/e2e-tests/tests/pg-tde/10-verify-after-change.yaml @@ -0,0 +1,27 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 60 +commands: + 
- script: |- + set -o errexit + set -o xtrace + + source ../../functions + + primary=$(get_pod_by_role pg-tde primary name) + data=$(kubectl exec ${primary} -n "${NAMESPACE}" -- bash -c 'psql -q -t -d myapp -c "SELECT * from myTable;"') + kubectl create configmap -n "${NAMESPACE}" 10-read-after-change --from-literal=data="${data}" + + run_psql_command \ + "SELECT pg_tde_verify_key();" \ + "$(get_psql_uri pg-tde postgres)/myapp" + + # Verify phase 2 cleanup: temp credential files should not exist on /pgdata + if kubectl exec ${primary} -n "${NAMESPACE}" -- test -f /pgdata/tde-new-token; then + echo "ERROR: /pgdata/tde-new-token should have been cleaned up after phase 2" + exit 1 + fi + if kubectl exec ${primary} -n "${NAMESPACE}" -- test -f /pgdata/tde-new-ca.crt; then + echo "ERROR: /pgdata/tde-new-ca.crt should have been cleaned up after phase 2" + exit 1 + fi diff --git a/e2e-tests/tests/pg-tde/11-assert.yaml b/e2e-tests/tests/pg-tde/11-assert.yaml new file mode 100644 index 0000000000..ddcb0ea86d --- /dev/null +++ b/e2e-tests/tests/pg-tde/11-assert.yaml @@ -0,0 +1,103 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 180 +--- +kind: StatefulSet +apiVersion: apps/v1 +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pg-tde + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: pg-tde + controller: true + blockOwnerDeletion: true +spec: + template: + spec: + containers: + - name: database + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true + - mountPath: /pgconf/tde + name: pg-tde + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + - mountPath: /etc/patroni + name: 
patroni-config + readOnly: true + - mountPath: /opt/crunchy + name: crunchy-bin + - mountPath: /tmp + name: tmp + - mountPath: /dev/shm + name: dshm + - name: replication-cert-copy + - name: pgbackrest + - name: pgbackrest-config + volumes: + - name: cert-volume + - name: postgres-data + - name: database-containerinfo + - name: pg-tde + projected: + defaultMode: 384 + sources: + - secret: + items: + - key: token + path: token + name: vault-secret-rotated + - secret: + items: + - key: ca.crt + path: ca.crt + name: vault-secret-rotated + - name: pgbackrest-server + - name: pgbackrest-config + - name: patroni-config + - name: crunchy-bin + - name: tmp + - name: dshm +status: + observedGeneration: 4 + replicas: 1 + readyReplicas: 1 +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGCluster +metadata: + name: pg-tde +status: + state: ready + conditions: + - type: ReadyForBackup + status: "True" + - type: PGBackRestRepoHostReady + status: "True" + - type: PGBackRestReplicaRepoReady + status: "True" + - type: PGBackRestReplicaCreate + status: "True" + - type: ProxyAvailable + status: "True" + - message: pg_tde is disabled in PerconaPGCluster + reason: Disabled + status: "False" + type: PGTDEEnabled + - type: PGBackRestoreProgressing + status: "True" + - type: PostgresDataInitialized + status: "True" diff --git a/e2e-tests/tests/pg-tde/11-disable-pgtde.yaml b/e2e-tests/tests/pg-tde/11-disable-pgtde.yaml new file mode 100644 index 0000000000..828b745f8d --- /dev/null +++ b/e2e-tests/tests/pg-tde/11-disable-pgtde.yaml @@ -0,0 +1,27 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 60 +commands: + - script: |- + set -o xtrace + + source ../../functions + + # pg_tde requires all encrypted objects to be dropped first + run_psql_command \ + "DROP TABLE mytable;" \ + "$(get_psql_uri pg-tde postgres)/myapp" + + run_psql_command \ + "CHECKPOINT;" \ + "$(get_psql_uri pg-tde postgres)/postgres" + + get_cr \ + | yq '.spec.extensions.pg_tde.enabled = false' \ + | yq 
'.spec.extensions.pg_tde.vault.host = "https://vault-service.vault-service.svc:8200"' \ + | yq '.spec.extensions.pg_tde.vault.mountPath = "tde"' \ + | yq '.spec.extensions.pg_tde.vault.tokenSecret.name = "vault-secret-rotated"' \ + | yq '.spec.extensions.pg_tde.vault.tokenSecret.key = "token"' \ + | yq '.spec.extensions.pg_tde.vault.caSecret.name = "vault-secret-rotated"' \ + | yq '.spec.extensions.pg_tde.vault.caSecret.key = "ca.crt"' \ + | kubectl -n "${NAMESPACE}" apply -f - diff --git a/e2e-tests/tests/pg-tde/12-assert.yaml b/e2e-tests/tests/pg-tde/12-assert.yaml new file mode 100644 index 0000000000..b690c36589 --- /dev/null +++ b/e2e-tests/tests/pg-tde/12-assert.yaml @@ -0,0 +1,86 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +timeout: 300 +--- +kind: StatefulSet +apiVersion: apps/v1 +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pg-tde + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: pg-tde + controller: true + blockOwnerDeletion: true +spec: + template: + spec: + containers: + - name: database + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + - mountPath: /etc/patroni + name: patroni-config + readOnly: true + - mountPath: /opt/crunchy + name: crunchy-bin + - mountPath: /tmp + name: tmp + - mountPath: /dev/shm + name: dshm + - name: replication-cert-copy + - name: pgbackrest + - name: pgbackrest-config + volumes: + - name: cert-volume + - name: postgres-data + - name: database-containerinfo + - name: pgbackrest-server + - name: pgbackrest-config + - name: patroni-config + - name: crunchy-bin + - name: tmp + - name: dshm 
+status: + observedGeneration: 5 + replicas: 1 + readyReplicas: 1 +--- +apiVersion: pgv2.percona.com/v2 +kind: PerconaPGCluster +metadata: + name: pg-tde +status: + state: ready + conditions: + - type: ReadyForBackup + status: "True" + - type: PGBackRestRepoHostReady + status: "True" + - type: PGBackRestReplicaRepoReady + status: "True" + - type: PGBackRestReplicaCreate + status: "True" + - type: ProxyAvailable + status: "True" + - message: pg_tde is disabled in PerconaPGCluster + reason: Disabled + status: "False" + type: PGTDEEnabled + - type: PGBackRestoreProgressing + status: "True" + - type: PostgresDataInitialized + status: "True" diff --git a/e2e-tests/tests/pg-tde/12-remove-pgtde-config.yaml b/e2e-tests/tests/pg-tde/12-remove-pgtde-config.yaml new file mode 100644 index 0000000000..a0cb9a9c23 --- /dev/null +++ b/e2e-tests/tests/pg-tde/12-remove-pgtde-config.yaml @@ -0,0 +1,12 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +timeout: 60 +commands: + - script: |- + set -o xtrace + + source ../../functions + + get_cr \ + | yq 'del(.spec.extensions.pg_tde)' \ + | kubectl -n "${NAMESPACE}" apply -f - diff --git a/e2e-tests/tests/upgrade-minor/05-sleep-after-operator-update.yaml b/e2e-tests/tests/upgrade-minor/05-sleep-after-operator-update.yaml index 506a371d4e..c850e7e08f 100644 --- a/e2e-tests/tests/upgrade-minor/05-sleep-after-operator-update.yaml +++ b/e2e-tests/tests/upgrade-minor/05-sleep-after-operator-update.yaml @@ -1,6 +1,6 @@ apiVersion: kuttl.dev/v1beta1 kind: TestStep -timeout: 30 commands: - script: |- sleep 30 + timeout: 40 diff --git a/e2e-tests/vars.sh b/e2e-tests/vars.sh index d5f81833e4..e64f91f384 100755 --- a/e2e-tests/vars.sh +++ b/e2e-tests/vars.sh @@ -45,6 +45,7 @@ export IMAGE_PMM3_SERVER=${IMAGE_PMM3_SERVER:-"perconalab/pmm-server:3.4"} export PGOV1_TAG=${PGOV1_TAG:-"1.4.0"} export PGOV1_VER=${PGOV1_VER:-"14"} export MINIO_VER="5.4.0" +export VAULT_VER="0.32.0" # Add 'docker.io' for images that are provided without registry export 
REGISTRY_NAME="docker.io" diff --git a/internal/controller/postgrescluster/controller.go b/internal/controller/postgrescluster/controller.go index b5c44d221c..9ee8b8d6ea 100644 --- a/internal/controller/postgrescluster/controller.go +++ b/internal/controller/postgrescluster/controller.go @@ -43,6 +43,7 @@ import ( "github.com/percona/percona-postgresql-operator/v2/internal/pgmonitor" "github.com/percona/percona-postgresql-operator/v2/internal/pgstatmonitor" "github.com/percona/percona-postgresql-operator/v2/internal/pgstatstatements" + "github.com/percona/percona-postgresql-operator/v2/internal/pgtde" "github.com/percona/percona-postgresql-operator/v2/internal/pki" "github.com/percona/percona-postgresql-operator/v2/internal/pmm" "github.com/percona/percona-postgresql-operator/v2/internal/postgres" @@ -272,6 +273,15 @@ func (r *Reconciler) Reconcile( if cluster.Spec.Extensions.PGAudit { pgaudit.PostgreSQLParameters(&pgParameters) } + + pgTDECondition := meta.FindStatusCondition(cluster.Status.Conditions, + v1beta1.PGTDEEnabled) + pgTDEEnabled := pgTDECondition != nil && pgTDECondition.Status == metav1.ConditionTrue + // pg_tde should be removed from shared libraries only after extension is dropped + if cluster.Spec.Extensions.PGTDE.Enabled || pgTDEEnabled { + pgtde.PostgreSQLParameters(&pgParameters) + } + pgbackrest.PostgreSQL(cluster, &pgParameters, backupsSpecFound) pgmonitor.PostgreSQLParameters(cluster, &pgParameters) @@ -401,7 +411,10 @@ func (r *Reconciler) Reconcile( } if err == nil { - err = r.reconcilePostgresDatabases(ctx, cluster, instances) + err = r.reconcilePostgresDatabases(ctx, cluster, instances, patchClusterStatus) + } + if err == nil { + err = r.reconcilePGTDEProviders(ctx, cluster, instances, patchClusterStatus) } if err == nil { err = r.reconcilePostgresUsers(ctx, cluster, instances) diff --git a/internal/controller/postgrescluster/instance.go b/internal/controller/postgrescluster/instance.go index fc1d2c190c..4853bc7abd 100644 --- 
a/internal/controller/postgrescluster/instance.go +++ b/internal/controller/postgrescluster/instance.go @@ -18,6 +18,7 @@ import ( appsv1 "k8s.io/api/apps/v1" corev1 "k8s.io/api/core/v1" policyv1 "k8s.io/api/policy/v1" + "k8s.io/apimachinery/pkg/api/meta" "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" @@ -34,6 +35,7 @@ import ( "github.com/percona/percona-postgresql-operator/v2/internal/naming" "github.com/percona/percona-postgresql-operator/v2/internal/patroni" "github.com/percona/percona-postgresql-operator/v2/internal/pgbackrest" + "github.com/percona/percona-postgresql-operator/v2/internal/pgtde" "github.com/percona/percona-postgresql-operator/v2/internal/pki" "github.com/percona/percona-postgresql-operator/v2/internal/postgres" "github.com/percona/percona-postgresql-operator/v2/percona/k8s" @@ -1201,6 +1203,26 @@ func (r *Reconciler) reconcileInstance( postgresDataVolume, postgresWALVolume, tablespaceVolumes, &instance.Spec.Template.Spec) + // K8SPG-911: When a vault provider change is pending (phase 1 not yet + // done), keep the old TDE volume so pods don't restart before the + // temp-file-based provider change SQL runs. After phase 1, the + // revision matches tempRevision, so we release the hold and let + // the volume update trigger a pod restart for phase 2. 
+ if observed != nil && observed.Runner != nil && + cluster.Spec.Extensions.PGTDE.Vault != nil && + cluster.Status.PGTDERevision != "" { + vault := cluster.Spec.Extensions.PGTDE.Vault + tokenPath, caPath := pgtde.VaultCredentialPaths(vault) + standardRev, _ := pgTDEVaultRevision(vault, tokenPath, caPath) + tempTokenPath, tempCAPath := pgtde.TempVaultCredentialPaths(vault) + tempRev, _ := pgTDEVaultRevision(vault, tempTokenPath, tempCAPath) + + if cluster.Status.PGTDERevision != standardRev && + cluster.Status.PGTDERevision != tempRev { + preserveOldTDEVolume(&instance.Spec.Template.Spec, observed.Runner) + } + } + if backupsSpecFound { addPGBackRestToInstancePodSpec( ctx, cluster, instanceCertificates, &instance.Spec.Template.Spec) @@ -1322,6 +1344,18 @@ func generateInstanceStatefulSetIntent(_ context.Context, }, ) } + + pgTDECondition := meta.FindStatusCondition(cluster.Status.Conditions, + v1beta1.PGTDEEnabled) + pgTDEEnabled := pgTDECondition != nil && pgTDECondition.Status == metav1.ConditionTrue + // we should restart pods only after extension is dropped + if cluster.Spec.Extensions.PGTDE.Enabled || pgTDEEnabled { + sts.Spec.Template.Annotations = naming.Merge( + sts.Spec.Template.Annotations, + map[string]string{naming.TDEInstalledAnnotation: "true"}, + ) + } + sts.Spec.Template.Labels = naming.Merge( cluster.Spec.Metadata.GetLabelsOrNil(), spec.Metadata.GetLabelsOrNil(), @@ -1426,6 +1460,43 @@ func generateInstanceStatefulSetIntent(_ context.Context, sts.Spec.Template.Spec.ImagePullSecrets = cluster.Spec.ImagePullSecrets } +// pgTDEVaultRevision computes a hash of the vault configuration and credential +// paths for comparing with cluster.Status.PGTDERevision. 
+func pgTDEVaultRevision(vault *v1beta1.PGTDEVaultSpec, tokenPath, caPath string) (string, error) { + return safeHash32(func(hasher io.Writer) error { + _, err := fmt.Fprint(hasher, + vault.Host, vault.MountPath, + vault.TokenSecret.Name, vault.TokenSecret.Key, + vault.CASecret.Name, vault.CASecret.Key, + tokenPath, caPath) + return err + }) +} + +// preserveOldTDEVolume replaces the pg-tde volume in the new pod spec with +// the one from the currently running StatefulSet. This prevents pods from +// restarting with new vault credentials before the vault provider change +// SQL has been executed. +func preserveOldTDEVolume(podSpec *corev1.PodSpec, runner *appsv1.StatefulSet) { + var oldVolume *corev1.Volume + for i := range runner.Spec.Template.Spec.Volumes { + if runner.Spec.Template.Spec.Volumes[i].Name == naming.PGTDEVolume { + oldVolume = &runner.Spec.Template.Spec.Volumes[i] + break + } + } + if oldVolume == nil { + return + } + + for i := range podSpec.Volumes { + if podSpec.Volumes[i].Name == naming.PGTDEVolume { + podSpec.Volumes[i] = *oldVolume + return + } + } +} + // addPGBackRestToInstancePodSpec adds pgBackRest configurations and sidecars // to the PodSpec. func addPGBackRestToInstancePodSpec( diff --git a/internal/controller/postgrescluster/pgbackrest.go b/internal/controller/postgrescluster/pgbackrest.go index 18ccf8c3b1..e393b12f4d 100644 --- a/internal/controller/postgrescluster/pgbackrest.go +++ b/internal/controller/postgrescluster/pgbackrest.go @@ -1337,7 +1337,7 @@ func (r *Reconciler) reconcileRestoreJob(ctx context.Context, // NOTE (andrewlecuyer): Forcing users to put each argument separately might prevent the need // to do any escaping or use eval. 
cmd := pgbackrest.RestoreCommand(pgdata, hugePagesSetting, config.FetchKeyCommand(&cluster.Spec), - pgtablespaceVolumes, strings.Join(opts, " ")) + pgtablespaceVolumes, cluster.Spec.Extensions.PGTDE.Enabled, strings.Join(opts, " ")) // create the volume resources required for the postgres data directory dataVolumeMount := postgres.DataVolumeMount() @@ -1381,6 +1381,11 @@ func (r *Reconciler) reconcileRestoreJob(ctx context.Context, volumeMounts = append(volumeMounts, tablespaceVolumeMount) } + if vault := cluster.Spec.Extensions.PGTDE.Vault; vault != nil { + volumeMounts = append(volumeMounts, postgres.PGTDEVolumeMount()) + volumes = append(volumes, postgres.PGTDEVolume(vault)) + } + restoreJob := &batchv1.Job{} if err := r.generateRestoreJobIntent(cluster, configHash, instanceName, cmd, volumeMounts, volumes, dataSource, restoreJob); err != nil { diff --git a/internal/controller/postgrescluster/postgres.go b/internal/controller/postgrescluster/postgres.go index 792dbc0b71..de15eab1eb 100644 --- a/internal/controller/postgrescluster/postgres.go +++ b/internal/controller/postgrescluster/postgres.go @@ -28,6 +28,7 @@ import ( "k8s.io/client-go/util/retry" "sigs.k8s.io/controller-runtime/pkg/client" + "github.com/percona/percona-postgresql-operator/v2/internal/controller/runtime" "github.com/percona/percona-postgresql-operator/v2/internal/feature" "github.com/percona/percona-postgresql-operator/v2/internal/initialize" "github.com/percona/percona-postgresql-operator/v2/internal/logging" @@ -36,6 +37,7 @@ import ( "github.com/percona/percona-postgresql-operator/v2/internal/pgrepack" "github.com/percona/percona-postgresql-operator/v2/internal/pgstatmonitor" "github.com/percona/percona-postgresql-operator/v2/internal/pgstatstatements" + "github.com/percona/percona-postgresql-operator/v2/internal/pgtde" "github.com/percona/percona-postgresql-operator/v2/internal/pgvector" "github.com/percona/percona-postgresql-operator/v2/internal/postgis" 
"github.com/percona/percona-postgresql-operator/v2/internal/postgres" @@ -191,7 +193,10 @@ func (r *Reconciler) generatePostgresUserSecret( // reconcilePostgresDatabases creates databases inside of PostgreSQL. func (r *Reconciler) reconcilePostgresDatabases( - ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, + ctx context.Context, + cluster *v1beta1.PostgresCluster, + instances *observedInstances, + patchStatus func() error, ) error { const container = naming.ContainerDatabase var podExecutor postgres.Executor @@ -248,8 +253,8 @@ func (r *Reconciler) reconcilePostgresDatabases( } // Calculate a hash of the SQL that should be executed in PostgreSQL. - // K8SPG-375, K8SPG-577, K8SPG-699 - var pgAuditOK, pgStatMonitorOK, pgStatStatementsOK, pgvectorOK, pgRepackOK, postgisInstallOK bool + // K8SPG-375, K8SPG-577, K8SPG-699, K8SPG-911 + var pgAuditOK, pgStatMonitorOK, pgStatStatementsOK, pgvectorOK, pgRepackOK, pgTdeOK, postgisInstallOK bool create := func(ctx context.Context, exec postgres.Executor) error { // validate version string before running it in database _, err := gover.NewVersion(cluster.Labels[naming.LabelVersion]) @@ -336,6 +341,9 @@ func (r *Reconciler) reconcilePostgresDatabases( } } + // K8SPG-911 + pgTdeOK = pgtde.ReconcileExtension(ctx, exec, r.Recorder, cluster) == nil + // Enabling PostGIS extensions is a one-way operation // e.g., you can take a PostgresCluster and turn it into a PostGISCluster, // but you cannot reverse the process, as that would potentially remove an extension @@ -375,19 +383,202 @@ func (r *Reconciler) reconcilePostgresDatabases( // Apply the necessary SQL and record its hash in cluster.Status. Include // the hash in any log messages. 
- if err == nil { log := logging.FromContext(ctx).WithValues("revision", revision) err = errors.WithStack(create(logging.NewContext(ctx, log), podExecutor)) } + // K8SPG-472 - if err == nil && pgStatMonitorOK && pgAuditOK && pgvectorOK && postgisInstallOK && pgRepackOK { + if err == nil && + pgStatMonitorOK && + pgAuditOK && + pgvectorOK && + postgisInstallOK && + pgRepackOK && + pgTdeOK { cluster.Status.DatabaseRevision = revision + if err := patchStatus(); err != nil { + return errors.Wrap(err, "patch status") + } + } + + return err +} + +// reconcilePGTDEProviders configures pg_tde providers using a two-phase +// approach for vault credential changes: +// +// - Phase 1: The pod still mounts the OLD vault secret. Fetch the new +// credentials from the Kubernetes Secret into temp files and run +// pg_tde_change_global_key_provider_vault_v2 with those temp paths. +// Store a "temp" revision (hash includes temp paths). This releases +// the volume hold so the StatefulSet updates and pods restart. +// +// - Phase 2: After restart, the pod mounts the NEW vault secret at +// standard paths. Run pg_tde_change_global_key_provider_vault_v2 +// again with the standard mount paths so pg_tde no longer references +// temp files. Store the "standard" revision. +func (r *Reconciler) reconcilePGTDEProviders( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + instances *observedInstances, + patchStatus func() error, +) error { + const container = naming.ContainerDatabase + + if !cluster.Spec.Extensions.PGTDE.Enabled || cluster.Spec.Extensions.PGTDE.Vault == nil { + cluster.Status.PGTDERevision = "" + return nil + } + + log := logging.FromContext(ctx).WithName("PGTDE") + + // Wait for all instances to match their pod templates before configuring + // the vault provider. This prevents running SQL on pods that are mid-rollout. 
+ for _, inst := range instances.forCluster { + if matches, known := inst.PodMatchesPodTemplate(); !matches || !known { + log.V(1).Info("Waiting for instance to be updated", "instance", inst.Name) + return nil + } + } + + // Find the PostgreSQL instance that can execute SQL that writes system + // catalogs. When there is none, return early. + pod, _ := instances.writablePod(container) + if pod == nil { + return nil + } + + // We need to configure pg_tde after volumes are mounted and extension is created + if _, ok := pod.Annotations[naming.TDEInstalledAnnotation]; !ok { + return nil + } + + log = log.WithValues("pod", pod.Name) + ctx = logging.NewContext(ctx, log) + pgExecutor := func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) + } + + vault := cluster.Spec.Extensions.PGTDE.Vault + tokenPath, caPath := pgtde.VaultCredentialPaths(vault) + + standardRevision, err := pgTDEVaultRevision(vault, tokenPath, caPath) + if err == nil && standardRevision == cluster.Status.PGTDERevision { + return nil + } + + tempTokenPath, tempCAPath := pgtde.TempVaultCredentialPaths(vault) + tempRevision, _ := pgTDEVaultRevision(vault, tempTokenPath, tempCAPath) + + var revision string + if err == nil { + switch { + case cluster.Status.PGTDERevision == tempRevision: + // Phase 2: pod restarted with new volume mounted at standard paths. + // Change provider from temp paths to persistent mount paths, then + // clean up the temp files from /pgdata. 
+ log.Info("finalizing vault provider change with standard mount paths") + err = errors.WithStack( + pgtde.ReconcileVaultProvider(ctx, pgExecutor, cluster, tokenPath, caPath)) + if err == nil { + cleanupTempFile(ctx, pod, container, r.PodExec, tempTokenPath) + if tempCAPath != "" { + cleanupTempFile(ctx, pod, container, r.PodExec, tempCAPath) + } + } + revision = standardRevision + + case cluster.Status.PGTDERevision != "": + // Phase 1: vault config changed, pod still has old credentials. + // Fetch new credentials to temp files on /pgdata (persistent volume) + // and change the provider to use those paths. The temp files survive + // the pod restart so pg_tde can read them until phase 2 runs. + log.Info("changing vault provider using temporary credentials") + if err = fetchSecretToTempFile(ctx, r.Client, r.PodExec, cluster.Namespace, + vault.TokenSecret, pod, container, tempTokenPath); err != nil { + return errors.Wrap(err, "token secret") + } + + if vault.CASecret.Name != "" && vault.CASecret.Key != "" { + if err = fetchSecretToTempFile(ctx, r.Client, r.PodExec, cluster.Namespace, + vault.CASecret, pod, container, tempCAPath); err != nil { + return errors.Wrap(err, "CA secret") + } + } + + err = errors.WithStack( + pgtde.ReconcileVaultProvider(ctx, pgExecutor, cluster, tempTokenPath, tempCAPath)) + revision = tempRevision + + default: + // Initial setup: PGTDERevision is empty, use standard paths. + err = errors.WithStack( + pgtde.ReconcileVaultProvider(ctx, pgExecutor, cluster, tokenPath, caPath)) + revision = standardRevision + } + } + + if err == nil { + cluster.Status.PGTDERevision = revision + if err := patchStatus(); err != nil { + return errors.Wrap(err, "patch status") + } } return err } +// fetchSecretToTempFile reads a key from a Kubernetes Secret and writes it +// to a temporary file inside a pod container. 
+func fetchSecretToTempFile( + ctx context.Context, + k8sClient client.Reader, + podExec runtime.PodExecutor, + namespace string, + secretRef v1beta1.PGTDESecretObjectReference, + pod *corev1.Pod, + container string, + destPath string, +) error { + secret := &corev1.Secret{} + if err := k8sClient.Get(ctx, client.ObjectKey{ + Namespace: namespace, + Name: secretRef.Name, + }, secret); err != nil { + return errors.Wrapf(err, "get secret %q", secretRef.Name) + } + data, ok := secret.Data[secretRef.Key] + if !ok { + return errors.Errorf("key %q not found in secret %q", secretRef.Key, secretRef.Name) + } + + var stdout, stderr bytes.Buffer + err := podExec(ctx, pod.Namespace, pod.Name, container, + bytes.NewReader(data), &stdout, &stderr, + "bash", "-c", fmt.Sprintf("cat > %s && chmod 600 %s", destPath, destPath)) + if err != nil { + return errors.Wrapf(err, "write %s: %s", destPath, stderr.String()) + } + return nil +} + +// cleanupTempFile removes a temporary file from a pod container (best-effort). +func cleanupTempFile( + ctx context.Context, + pod *corev1.Pod, + container string, + podExec runtime.PodExecutor, + path string, +) { + var stdout, stderr bytes.Buffer + _ = podExec(ctx, pod.Namespace, pod.Name, container, + nil, &stdout, &stderr, + "bash", "-c", fmt.Sprintf("rm -f %s", path)) +} + // reconcilePostgresUsers writes the objects necessary to manage users and their // passwords in PostgreSQL. func (r *Reconciler) reconcilePostgresUsers( diff --git a/internal/naming/annotations.go b/internal/naming/annotations.go index ec04eb0e9a..9a48074fca 100644 --- a/internal/naming/annotations.go +++ b/internal/naming/annotations.go @@ -81,4 +81,7 @@ const ( // is present, the controller will not update the ConfigMap, allowing users to make custom // modifications that won't be overwritten during reconciliation. 
OverrideConfigAnnotation = perconaAnnotationPrefix + "override-config" + + // K8SPG-911 + TDEInstalledAnnotation = perconaAnnotationPrefix + "tde-installed" ) diff --git a/internal/naming/names.go b/internal/naming/names.go index 0ccd80abbe..af66c6baeb 100644 --- a/internal/naming/names.go +++ b/internal/naming/names.go @@ -134,6 +134,21 @@ const ( ReplicationCACertPath = "replication/ca.crt" ) +const ( + // PGTDEVolume is the name of the pg_tde secret volume and volume mount in a + // PostgreSQL instance Pod + PGTDEVolume = "pg-tde" + + // PGTDEMountPath is the path for mounting the pg_tde secret + PGTDEMountPath = "/pgconf/tde" + + // PGTDEVaultProvider is the name of the Vault provider + PGTDEVaultProvider = "vault-provider" + + // PGTDEGlobalKey is the name of the global key + PGTDEGlobalKey = "global-master-key" +) + const ( // PGBackRestRepoContainerName is the name assigned to the container used to run pgBackRest PGBackRestRepoContainerName = "pgbackrest" diff --git a/internal/patroni/config.go b/internal/patroni/config.go index f0387a36c0..3308f8a9b6 100644 --- a/internal/patroni/config.go +++ b/internal/patroni/config.go @@ -157,6 +157,14 @@ func clusterYAML( }, } + if cluster.Spec.Extensions.PGTDE.Enabled { + postgresqlSection := root["postgresql"].(map[string]any) + postgresqlSection["bin_name"] = map[string]any{ + "pg_basebackup": "pg_tde_basebackup", + "pg_rewind": "pg_tde_rewind", + } + } + if !ClusterBootstrapped(cluster) { // Patroni has not yet bootstrapped. Populate the "bootstrap.dcs" field to // facilitate it. When Patroni is already bootstrapped, this field is ignored. 
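The `clusterYAML` change above overrides Patroni's binary names whenever pg_tde is enabled. For reference, the resulting Patroni configuration fragment looks roughly like this (a sketch with other `postgresql` keys elided; it assumes the pg_tde build ships `pg_tde_basebackup` and `pg_tde_rewind` wrapper binaries, as the `bin_name` mapping and the test below imply):

```yaml
# Sketch of the Patroni config produced when spec.extensions.pg_tde.enabled
# is true: bin_name points Patroni at the pg_tde wrapper tools instead of
# the stock pg_basebackup/pg_rewind, so replica bootstrap and rewind work
# against encrypted data directories.
postgresql:
  bin_name:
    pg_basebackup: pg_tde_basebackup
    pg_rewind: pg_tde_rewind
```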
diff --git a/internal/patroni/config_test.go b/internal/patroni/config_test.go index a1edec6386..997f52d404 100644 --- a/internal/patroni/config_test.go +++ b/internal/patroni/config_test.go @@ -121,6 +121,49 @@ watchdog: assert.Equal(t, labels["postgres-operator.crunchydata.com/cluster"], "cluster-name") }) + t.Run("PGTDE enabled adds bin_name", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + err := cluster.Default(context.Background(), nil) + assert.NilError(t, err) + cluster.Namespace = "some-namespace" + cluster.Name = "cluster-name" + cluster.Spec.PostgresVersion = 17 + cluster.Spec.Extensions.PGTDE.Enabled = true + + data, err := clusterYAML(cluster, postgres.HBAs{}, postgres.Parameters{}) + assert.NilError(t, err) + + var parsed map[string]any + assert.NilError(t, yaml.Unmarshal([]byte(data), &parsed)) + + pgSection, ok := parsed["postgresql"].(map[string]any) + assert.Assert(t, ok, "expected postgresql section") + binName, ok := pgSection["bin_name"].(map[string]any) + assert.Assert(t, ok, "expected postgresql.bin_name section") + + assert.Equal(t, binName["pg_basebackup"], "pg_tde_basebackup") + assert.Equal(t, binName["pg_rewind"], "pg_tde_rewind") + }) + + t.Run("PGTDE disabled no bin_name", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + err := cluster.Default(context.Background(), nil) + assert.NilError(t, err) + cluster.Namespace = "some-namespace" + cluster.Name = "cluster-name" + + data, err := clusterYAML(cluster, postgres.HBAs{}, postgres.Parameters{}) + assert.NilError(t, err) + + var parsed map[string]any + assert.NilError(t, yaml.Unmarshal([]byte(data), &parsed)) + + pgSection, ok := parsed["postgresql"].(map[string]any) + assert.Assert(t, ok, "expected postgresql section") + _, hasBinName := pgSection["bin_name"] + assert.Assert(t, !hasBinName, "expected no bin_name when PGTDE is disabled") + }) + t.Run(">PG10", func(t *testing.T) { cluster := new(v1beta1.PostgresCluster) err := 
cluster.Default(context.Background(), nil) diff --git a/internal/pgbackrest/config.go b/internal/pgbackrest/config.go index 18ecf847d7..56228a18bb 100644 --- a/internal/pgbackrest/config.go +++ b/internal/pgbackrest/config.go @@ -173,7 +173,7 @@ func MakePGBackrestLogDir(template *corev1.PodTemplateSpec, // - Renames the data directory as needed to bootstrap the cluster using the restored database. // This ensures compatibility with the "existing" bootstrap method that is included in the // Patroni config when bootstrapping a cluster using an existing data directory. -func RestoreCommand(pgdata, hugePagesSetting, fetchKeyCommand string, _ []*corev1.PersistentVolumeClaim, args ...string) []string { +func RestoreCommand(pgdata, hugePagesSetting, fetchKeyCommand string, _ []*corev1.PersistentVolumeClaim, tdeEnabled bool, args ...string) []string { ps := postgres.NewParameterSet() ps.Add("data_directory", pgdata) ps.Add("huge_pages", hugePagesSetting) @@ -187,6 +187,10 @@ func RestoreCommand(pgdata, hugePagesSetting, fetchKeyCommand string, _ []*corev // progress during recovery. 
ps.Add("hot_standby", "on") + if tdeEnabled { + ps.Add("shared_preload_libraries", "pg_tde") + } + if fetchKeyCommand != "" { ps.Add("encryption_key_command", fetchKeyCommand) } diff --git a/internal/pgbackrest/config_test.go b/internal/pgbackrest/config_test.go index 64d7e13000..db1e45de7a 100644 --- a/internal/pgbackrest/config_test.go +++ b/internal/pgbackrest/config_test.go @@ -341,7 +341,7 @@ func TestRestoreCommand(t *testing.T) { "--stanza=" + DefaultStanzaName, "--pg1-path=" + pgdata, "--repo=1", } - command := RestoreCommand(pgdata, "try", "", nil, strings.Join(opts, " ")) + command := RestoreCommand(pgdata, "try", "", nil, false, strings.Join(opts, " ")) assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) assert.Assert(t, len(command) > 3) @@ -358,7 +358,7 @@ func TestRestoreCommand(t *testing.T) { func TestRestoreCommandPrettyYAML(t *testing.T) { assert.Assert(t, cmp.MarshalContains( - RestoreCommand("/dir", "try", "", nil, "--options"), + RestoreCommand("/dir", "try", "", nil, false, "--options"), "\n- |", ), "expected literal block scalar") @@ -367,7 +367,7 @@ func TestRestoreCommandPrettyYAML(t *testing.T) { func TestRestoreCommandTDE(t *testing.T) { assert.Assert(t, cmp.MarshalContains( - RestoreCommand("/dir", "try", "echo testValue", nil, "--options"), + RestoreCommand("/dir", "try", "echo testValue", nil, false, "--options"), "encryption_key_command = 'echo testValue'", ), "expected encryption_key_command setting") diff --git a/internal/pgtde/postgres.go b/internal/pgtde/postgres.go new file mode 100644 index 0000000000..abe2b5c4ae --- /dev/null +++ b/internal/pgtde/postgres.go @@ -0,0 +1,281 @@ +package pgtde + +import ( + "context" + "fmt" + "strings" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/tools/record" + + "github.com/percona/percona-postgresql-operator/v2/internal/logging" + 
"github.com/percona/percona-postgresql-operator/v2/internal/naming" + "github.com/percona/percona-postgresql-operator/v2/internal/postgres" + crunchyv1beta1 "github.com/percona/percona-postgresql-operator/v2/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // TempTokenPath is where the new vault token is written inside the pod + // during a vault provider change (before the volume is updated). + // Stored under /pgdata so it survives pod restarts (persistent volume). + TempTokenPath = "/pgdata/tde-new-token" + // TempCAPath is where the new CA certificate is written inside the pod + // during a vault provider change (before the volume is updated). + // Stored under /pgdata so it survives pod restarts (persistent volume). + TempCAPath = "/pgdata/tde-new-ca.crt" +) + +// enableInPostgreSQL installs pg_tde extension in every database. +func enableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.ExecInAllDatabases(ctx, + strings.Join([]string{ + `SET client_min_messages = WARNING;`, + `CREATE EXTENSION IF NOT EXISTS pg_tde;`, + `ALTER EXTENSION pg_tde UPDATE;`, + }, "\n"), + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one command fails. + "QUIET": "on", // Do not print successful commands to stdout. + }) + + log.V(1).Info("enabled pg_tde", "stdout", stdout, "stderr", stderr) + + return err +} + +func disableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.ExecInAllDatabases(ctx, + strings.Join([]string{ + `SET client_min_messages = WARNING;`, + `DROP EXTENSION IF EXISTS pg_tde;`, + }, "\n"), + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one command fails. + "QUIET": "on", // Do not print successful commands to stdout. 
+ }) + + log.V(1).Info("disabled pg_tde", "stdout", stdout, "stderr", stderr) + + return err +} + +func ReconcileExtension(ctx context.Context, exec postgres.Executor, record record.EventRecorder, cluster *crunchyv1beta1.PostgresCluster) error { + if !cluster.Spec.Extensions.PGTDE.Enabled { + err := disableInPostgreSQL(ctx, exec) + if err != nil { + record.Event(cluster, corev1.EventTypeWarning, "pgTdeEnabled", "Unable to disable pg_tde") + return err + } + + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + Type: crunchyv1beta1.PGTDEEnabled, + Status: metav1.ConditionFalse, + Reason: "Disabled", + Message: "pg_tde is disabled in PerconaPGCluster", + ObservedGeneration: cluster.GetGeneration(), + }) + + return nil + } + + err := enableInPostgreSQL(ctx, exec) + if err != nil { + record.Event(cluster, corev1.EventTypeWarning, "pgTdeDisabled", "Unable to install pg_tde") + return err + } + + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + Type: crunchyv1beta1.PGTDEEnabled, + Status: metav1.ConditionTrue, + Reason: "Enabled", + Message: "pg_tde is enabled in PerconaPGCluster", + ObservedGeneration: cluster.GetGeneration(), + }) + + return nil +} + +func PostgreSQLParameters(outParameters *postgres.Parameters) { + outParameters.Mandatory.AppendToList("shared_preload_libraries", "pg_tde") + outParameters.Mandatory.Add("pg_tde.wal_encrypt", "off") +} + +// VaultCredentialPaths returns the standard volume mount paths for the vault +// token and CA certificate based on the vault spec's secret key names. 
+func VaultCredentialPaths(vault *crunchyv1beta1.PGTDEVaultSpec) (tokenPath, caPath string) { + tokenPath = naming.PGTDEMountPath + "/" + vault.TokenSecret.Key + if vault.CASecret.Key != "" { + caPath = naming.PGTDEMountPath + "/" + vault.CASecret.Key + } + return tokenPath, caPath +} + +// TempVaultCredentialPaths returns the temporary file paths used during a vault +// provider change, before the pod volume is updated with new credentials. +func TempVaultCredentialPaths(vault *crunchyv1beta1.PGTDEVaultSpec) (tokenPath, caPath string) { + tokenPath = TempTokenPath + if vault.CASecret.Name != "" && vault.CASecret.Key != "" { + caPath = TempCAPath + } + return tokenPath, caPath +} + +var errAlreadyExists = errors.New("already exists") + +func addVaultProvider(ctx context.Context, exec postgres.Executor, vault *crunchyv1beta1.PGTDEVaultSpec, tokenPath, caPath string) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.Exec(ctx, + strings.NewReader(strings.Join([]string{ + // Quiet NOTICE messages from IF NOT EXISTS statements. + // - https://www.postgresql.org/docs/current/runtime-config-client.html + `SET client_min_messages = WARNING;`, + `SELECT pg_tde_add_global_key_provider_vault_v2( + :'provider_name', :'vault_host', :'vault_mount_path', :'token_path', NULLIF(:'ca_path', '') + );`, + }, "\n")), + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful statements to stdout. 
+			"provider_name": naming.PGTDEVaultProvider,
+			"vault_host": vault.Host,
+			"vault_mount_path": vault.MountPath,
+			"token_path": tokenPath,
+			"ca_path": caPath,
+		}, nil)
+
+	if err != nil {
+		log.Info("failed to add pg_tde vault provider", "stdout", stdout, "stderr", stderr)
+	} else {
+		log.Info("added pg_tde vault provider", "stdout", stdout, "stderr", stderr)
+	}
+
+	if strings.Contains(stderr, "already exists") {
+		return errAlreadyExists
+	}
+
+	return err
+}
+
+func createGlobalKey(ctx context.Context, exec postgres.Executor, clusterID types.UID) error {
+	log := logging.FromContext(ctx)
+
+	globalKey := fmt.Sprintf("%s-%s", naming.PGTDEGlobalKey, clusterID)
+
+	stdout, stderr, err := exec.Exec(ctx,
+		strings.NewReader(strings.Join([]string{
+			// Quiet NOTICE messages.
+			// - https://www.postgresql.org/docs/current/runtime-config-client.html
+			`SET client_min_messages = WARNING;`,
+			`SELECT pg_tde_create_key_using_global_key_provider(:'global_key', :'provider_name');`,
+		}, "\n")),
+		map[string]string{
+			"ON_ERROR_STOP": "on", // Abort when any one statement fails.
+			"QUIET": "on", // Do not print successful statements to stdout.
+			"provider_name": naming.PGTDEVaultProvider,
+			"global_key": globalKey,
+		}, nil)
+
+	if err != nil {
+		log.Info("failed to create global key", "globalKey", globalKey, "stdout", stdout, "stderr", stderr)
+	} else {
+		log.Info("created global key", "globalKey", globalKey, "stdout", stdout, "stderr", stderr)
+	}
+
+	if strings.Contains(stderr, "already exists") {
+		return errAlreadyExists
+	}
+
+	return err
+}
+
+func setDefaultKey(ctx context.Context, exec postgres.Executor, clusterID types.UID) error {
+	log := logging.FromContext(ctx)
+
+	globalKey := fmt.Sprintf("%s-%s", naming.PGTDEGlobalKey, clusterID)
+
+	stdout, stderr, err := exec.Exec(ctx,
+		strings.NewReader(strings.Join([]string{
+			// Quiet NOTICE messages.
+			// - https://www.postgresql.org/docs/current/runtime-config-client.html
+			`SET client_min_messages = WARNING;`,
+			`SELECT pg_tde_set_default_key_using_global_key_provider(:'global_key', :'provider_name');`,
+		}, "\n")),
+		map[string]string{
+			"ON_ERROR_STOP": "on", // Abort when any one statement fails.
+			"QUIET": "on", // Do not print successful statements to stdout.
+			"provider_name": naming.PGTDEVaultProvider,
+			"global_key": globalKey,
+		}, nil)
+
+	if err != nil {
+		log.Info("failed to set default key", "globalKey", globalKey, "stdout", stdout, "stderr", stderr)
+	} else {
+		log.Info("set default key", "globalKey", globalKey, "stdout", stdout, "stderr", stderr)
+	}
+
+	return err
+}
+
+func changeVaultProvider(ctx context.Context, exec postgres.Executor, vault *crunchyv1beta1.PGTDEVaultSpec, tokenPath, caPath string) error {
+	log := logging.FromContext(ctx)
+
+	stdout, stderr, err := exec.Exec(ctx,
+		strings.NewReader(strings.Join([]string{
+			// Quiet NOTICE messages.
+			// - https://www.postgresql.org/docs/current/runtime-config-client.html
+			`SET client_min_messages = WARNING;`,
+			`SELECT pg_tde_change_global_key_provider_vault_v2(
+				:'provider_name', :'vault_host', :'vault_mount_path', :'token_path', NULLIF(:'ca_path', '')
+			);`,
+		}, "\n")),
+		map[string]string{
+			"ON_ERROR_STOP": "on", // Abort when any one statement fails.
+			"QUIET": "on", // Do not print successful statements to stdout.
+			"provider_name": naming.PGTDEVaultProvider,
+			"vault_host": vault.Host,
+			"vault_mount_path": vault.MountPath,
+			"token_path": tokenPath,
+			"ca_path": caPath,
+		}, nil)
+
+	if err != nil {
+		log.Info("failed to change pg_tde vault provider", "stdout", stdout, "stderr", stderr)
+	} else {
+		log.Info("changed pg_tde vault provider", "stdout", stdout, "stderr", stderr)
+	}
+
+	return err
+}
+
+// ReconcileVaultProvider configures or updates the pg_tde vault key provider.
+// tokenPath and caPath are the file paths inside the pod where the vault +// credentials can be read. For initial setup these are the standard volume +// mount paths; for provider changes they may be temporary file paths. +func ReconcileVaultProvider(ctx context.Context, exec postgres.Executor, cluster *crunchyv1beta1.PostgresCluster, tokenPath, caPath string) error { + vault := cluster.Spec.Extensions.PGTDE.Vault + + if cluster.Status.PGTDERevision == "" { + err := addVaultProvider(ctx, exec, vault, tokenPath, caPath) + + if err == nil || errors.Is(err, errAlreadyExists) { + err = createGlobalKey(ctx, exec, cluster.UID) + } + + if err == nil || errors.Is(err, errAlreadyExists) { + err = setDefaultKey(ctx, exec, cluster.UID) + } + + return err + } + + return changeVaultProvider(ctx, exec, vault, tokenPath, caPath) +} diff --git a/internal/pgtde/postgres_test.go b/internal/pgtde/postgres_test.go new file mode 100644 index 0000000000..bca6fe3d56 --- /dev/null +++ b/internal/pgtde/postgres_test.go @@ -0,0 +1,604 @@ +package pgtde + +import ( + "context" + "errors" + "fmt" + "io" + "strings" + "testing" + + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/tools/record" + + "github.com/percona/percona-postgresql-operator/v2/internal/naming" + "github.com/percona/percona-postgresql-operator/v2/internal/postgres" + crunchyv1beta1 "github.com/percona/percona-postgresql-operator/v2/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestEnableInPostgreSQL(t *testing.T) { + expected := errors.New("whoops") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + assert.Assert(t, strings.Contains(strings.Join(command, "\n"), + `SELECT datname FROM pg_catalog.pg_database`, + ), 
"expected all databases and templates") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.Join([]string{ + `SET client_min_messages = WARNING;`, + `CREATE EXTENSION IF NOT EXISTS pg_tde;`, + `ALTER EXTENSION pg_tde UPDATE;`, + }, "\n")) + + return expected + } + + ctx := t.Context() + assert.Equal(t, expected, enableInPostgreSQL(ctx, exec)) +} + +func TestDisableInPostgreSQL(t *testing.T) { + expected := errors.New("whoops") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + assert.Assert(t, strings.Contains(strings.Join(command, "\n"), + `SELECT datname FROM pg_catalog.pg_database`, + ), "expected all databases and templates") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.Join([]string{ + `SET client_min_messages = WARNING;`, + `DROP EXTENSION IF EXISTS pg_tde;`, + }, "\n")) + + return expected + } + + ctx := context.Background() + assert.Equal(t, expected, disableInPostgreSQL(ctx, exec)) +} + +func TestPostgreSQLParameters(t *testing.T) { + parameters := postgres.Parameters{ + Mandatory: postgres.NewParameterSet(), + } + + // No comma when empty. + PostgreSQLParameters(¶meters) + + assert.Assert(t, parameters.Default == nil) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "shared_preload_libraries": "pg_tde", + "pg_tde.wal_encrypt": "off", + }) + + // Appended when not empty. 
+ parameters.Mandatory.Add("shared_preload_libraries", "some,existing") + PostgreSQLParameters(¶meters) + + assert.Assert(t, parameters.Default == nil) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "shared_preload_libraries": "some,existing,pg_tde", + "pg_tde.wal_encrypt": "off", + }) +} + +func TestAddVaultProvider(t *testing.T) { + t.Run("with CA secret", func(t *testing.T) { + expected := errors.New("whoops") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + sql := string(b) + + assert.Assert(t, strings.Contains(sql, "pg_tde_add_global_key_provider_vault_v2")) + + joined := strings.Join(command, " ") + assert.Assert(t, strings.Contains(joined, "--set=provider_name="+naming.PGTDEVaultProvider)) + assert.Assert(t, strings.Contains(joined, "--set=vault_host=https://vault.example.com")) + assert.Assert(t, strings.Contains(joined, "--set=vault_mount_path=secret/data")) + assert.Assert(t, strings.Contains(joined, "--set=token_path="+naming.PGTDEMountPath+"/token-key")) + assert.Assert(t, strings.Contains(joined, "--set=ca_path="+naming.PGTDEMountPath+"/ca-key")) + + return expected + } + + ctx := context.Background() + vault := &crunchyv1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com", + MountPath: "secret/data", + TokenSecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "token-secret", + Key: "token-key", + }, + CASecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "ca-secret", + Key: "ca-key", + }, + } + tokenPath, caPath := VaultCredentialPaths(vault) + assert.Equal(t, expected, addVaultProvider(ctx, exec, vault, tokenPath, caPath)) + }) + + t.Run("already exists", func(t *testing.T) { + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command 
...string, + ) error { + _, _ = stderr.Write([]byte("ERROR: already exists")) + return nil + } + + ctx := context.Background() + vault := &crunchyv1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com", + MountPath: "secret/data", + TokenSecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "token-secret", + Key: "token-key", + }, + } + tokenPath, caPath := VaultCredentialPaths(vault) + assert.Assert(t, errors.Is(addVaultProvider(ctx, exec, vault, tokenPath, caPath), errAlreadyExists)) + }) + + t.Run("without CA secret", func(t *testing.T) { + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + joined := strings.Join(command, " ") + assert.Assert(t, strings.Contains(joined, "--set=ca_path="), + "ca_path should be set to empty string") + + return nil + } + + ctx := t.Context() + vault := &crunchyv1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com", + MountPath: "secret/data", + TokenSecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "token-secret", + Key: "token-key", + }, + } + tokenPath, caPath := VaultCredentialPaths(vault) + assert.NilError(t, addVaultProvider(ctx, exec, vault, tokenPath, caPath)) + }) +} + +func TestCreateGlobalKey(t *testing.T) { + t.Run("success", func(t *testing.T) { + expected := errors.New("whoops") + clusterID := types.UID("test-cluster-uid") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + sql := string(b) + + assert.Assert(t, strings.Contains(sql, "pg_tde_create_key_using_global_key_provider")) + + joined := strings.Join(command, " ") + assert.Assert(t, strings.Contains(joined, "--set=provider_name="+naming.PGTDEVaultProvider)) + assert.Assert(t, strings.Contains(joined, + "--set=global_key="+fmt.Sprintf("%s-%s", 
naming.PGTDEGlobalKey, clusterID))) + + return expected + } + + ctx := t.Context() + assert.Equal(t, expected, createGlobalKey(ctx, exec, clusterID)) + }) + + t.Run("already exists", func(t *testing.T) { + clusterID := types.UID("test-cluster-uid") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + _, _ = stderr.Write([]byte("ERROR: already exists")) + return nil + } + + ctx := t.Context() + assert.Assert(t, errors.Is(createGlobalKey(ctx, exec, clusterID), errAlreadyExists)) + }) +} + +func TestSetDefaultKey(t *testing.T) { + t.Run("success", func(t *testing.T) { + expected := errors.New("whoops") + clusterID := types.UID("test-cluster-uid") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + sql := string(b) + + assert.Assert(t, strings.Contains(sql, "pg_tde_set_default_key_using_global_key_provider")) + + joined := strings.Join(command, " ") + assert.Assert(t, strings.Contains(joined, "--set=provider_name="+naming.PGTDEVaultProvider)) + assert.Assert(t, strings.Contains(joined, + "--set=global_key="+fmt.Sprintf("%s-%s", naming.PGTDEGlobalKey, clusterID))) + + return expected + } + + ctx := context.Background() + assert.Equal(t, expected, setDefaultKey(ctx, exec, clusterID)) + }) +} + +func TestChangeVaultProvider(t *testing.T) { + t.Run("with CA secret", func(t *testing.T) { + expected := errors.New("whoops") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + sql := string(b) + + assert.Assert(t, strings.Contains(sql, 
"pg_tde_change_global_key_provider_vault_v2")) + + joined := strings.Join(command, " ") + assert.Assert(t, strings.Contains(joined, "--set=provider_name="+naming.PGTDEVaultProvider)) + assert.Assert(t, strings.Contains(joined, "--set=vault_host=https://vault.example.com")) + assert.Assert(t, strings.Contains(joined, "--set=vault_mount_path=secret/data")) + assert.Assert(t, strings.Contains(joined, "--set=token_path="+naming.PGTDEMountPath+"/token-key")) + assert.Assert(t, strings.Contains(joined, "--set=ca_path="+naming.PGTDEMountPath+"/ca-key")) + + return expected + } + + ctx := context.Background() + vault := &crunchyv1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com", + MountPath: "secret/data", + TokenSecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "token-secret", + Key: "token-key", + }, + CASecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "ca-secret", + Key: "ca-key", + }, + } + tokenPath, caPath := VaultCredentialPaths(vault) + assert.Equal(t, expected, changeVaultProvider(ctx, exec, vault, tokenPath, caPath)) + }) + + t.Run("without CA secret", func(t *testing.T) { + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + joined := strings.Join(command, " ") + assert.Assert(t, strings.Contains(joined, "--set=ca_path="), + "ca_path should be set to empty string") + + return nil + } + + ctx := context.Background() + vault := &crunchyv1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com", + MountPath: "secret/data", + TokenSecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "token-secret", + Key: "token-key", + }, + } + tokenPath, caPath := VaultCredentialPaths(vault) + assert.NilError(t, changeVaultProvider(ctx, exec, vault, tokenPath, caPath)) + }) +} + +func TestReconcileExtension(t *testing.T) { + t.Run("disabled successfully", func(t *testing.T) { + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + 
return nil + } + + ctx := t.Context() + recorder := record.NewFakeRecorder(10) + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Enabled = false + cluster.Generation = 1 + + err := ReconcileExtension(ctx, exec, recorder, cluster) + assert.NilError(t, err) + + condition := meta.FindStatusCondition(cluster.Status.Conditions, crunchyv1beta1.PGTDEEnabled) + assert.Assert(t, condition != nil) + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Disabled") + assert.Equal(t, condition.Message, "pg_tde is disabled in PerconaPGCluster") + assert.Equal(t, condition.ObservedGeneration, int64(1)) + }) + + t.Run("disable error records event", func(t *testing.T) { + expected := errors.New("disable failed") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return expected + } + + ctx := t.Context() + recorder := record.NewFakeRecorder(10) + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Enabled = false + + err := ReconcileExtension(ctx, exec, recorder, cluster) + assert.Equal(t, expected, err) + + select { + case event := <-recorder.Events: + assert.Assert(t, strings.Contains(event, "pgTdeEnabled")) + assert.Assert(t, strings.Contains(event, "Unable to disable pg_tde")) + default: + t.Fatal("expected event to be recorded") + } + }) + + t.Run("enabled successfully", func(t *testing.T) { + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return nil + } + + ctx := t.Context() + recorder := record.NewFakeRecorder(10) + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Enabled = true + cluster.Generation = 2 + + err := ReconcileExtension(ctx, exec, recorder, cluster) + assert.NilError(t, err) + + condition := meta.FindStatusCondition(cluster.Status.Conditions, crunchyv1beta1.PGTDEEnabled) + assert.Assert(t, condition != nil) + 
assert.Equal(t, condition.Status, metav1.ConditionTrue) + assert.Equal(t, condition.Reason, "Enabled") + assert.Equal(t, condition.Message, "pg_tde is enabled in PerconaPGCluster") + assert.Equal(t, condition.ObservedGeneration, int64(2)) + }) + + t.Run("enable error records event", func(t *testing.T) { + expected := errors.New("enable failed") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return expected + } + + ctx := t.Context() + recorder := record.NewFakeRecorder(10) + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Enabled = true + + err := ReconcileExtension(ctx, exec, recorder, cluster) + assert.Equal(t, expected, err) + + select { + case event := <-recorder.Events: + assert.Assert(t, strings.Contains(event, "pgTdeDisabled")) + assert.Assert(t, strings.Contains(event, "Unable to install pg_tde")) + default: + t.Fatal("expected event to be recorded") + } + }) +} + +func TestReconcileVaultProvider(t *testing.T) { + vault := &crunchyv1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com", + MountPath: "secret/data", + TokenSecret: crunchyv1beta1.PGTDESecretObjectReference{ + Name: "token-secret", + Key: "token-key", + }, + } + tokenPath, caPath := VaultCredentialPaths(vault) + + t.Run("first time all succeed", func(t *testing.T) { + callCount := 0 + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + callCount++ + return nil + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.NilError(t, err) + assert.Equal(t, callCount, 3) + }) + + t.Run("first time addVaultProvider fails", func(t *testing.T) { + expected := errors.New("vault error") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command 
...string, + ) error { + return expected + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.Equal(t, expected, err) + }) + + t.Run("first time addVaultProvider already exists proceeds", func(t *testing.T) { + callCount := 0 + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + callCount++ + if callCount == 1 { + _, _ = stderr.Write([]byte("already exists")) + return nil + } + return nil + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.NilError(t, err) + assert.Equal(t, callCount, 3) + }) + + t.Run("first time createGlobalKey fails", func(t *testing.T) { + expected := errors.New("key error") + callCount := 0 + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + callCount++ + if callCount == 2 { + return expected + } + return nil + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.Equal(t, expected, err) + assert.Equal(t, callCount, 2) + }) + + t.Run("first time createGlobalKey already exists proceeds", func(t *testing.T) { + callCount := 0 + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + callCount++ + if callCount == 2 { + _, _ = stderr.Write([]byte("already exists")) + return nil + } + return nil + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.UID = "test-uid" + + err := 
ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.NilError(t, err) + assert.Equal(t, callCount, 3) + }) + + t.Run("first time setDefaultKey fails", func(t *testing.T) { + expected := errors.New("default key error") + callCount := 0 + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + callCount++ + if callCount == 3 { + return expected + } + return nil + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.Equal(t, expected, err) + assert.Equal(t, callCount, 3) + }) + + t.Run("revision set calls changeVaultProvider", func(t *testing.T) { + callCount := 0 + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + callCount++ + b, _ := io.ReadAll(stdin) + assert.Assert(t, strings.Contains(string(b), "pg_tde_change_global_key_provider_vault_v2")) + return nil + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.Status.PGTDERevision = "some-revision" + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.NilError(t, err) + assert.Equal(t, callCount, 1) + }) + + t.Run("revision set changeVaultProvider fails", func(t *testing.T) { + expected := errors.New("change error") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return expected + } + + ctx := t.Context() + cluster := &crunchyv1beta1.PostgresCluster{} + cluster.Spec.Extensions.PGTDE.Vault = vault + cluster.Status.PGTDERevision = "some-revision" + cluster.UID = "test-uid" + + err := ReconcileVaultProvider(ctx, exec, cluster, tokenPath, caPath) + assert.Equal(t, expected, err) + }) +} diff --git 
a/internal/pgvector/postgres.go b/internal/pgvector/postgres.go index ba3122a03e..34b5f4fc21 100644 --- a/internal/pgvector/postgres.go +++ b/internal/pgvector/postgres.go @@ -12,9 +12,8 @@ func EnableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { log := logging.FromContext(ctx) stdout, stderr, err := exec.ExecInAllDatabases(ctx, - // Quiet the NOTICE from IF EXISTS, and install the pgAudit event triggers. + // Quiet the NOTICE from IF EXISTS, and create pgvector extension. // - https://www.postgresql.org/docs/current/runtime-config-client.html - // - https://github.com/pgaudit/pgaudit#settings `SET client_min_messages = WARNING; CREATE EXTENSION IF NOT EXISTS vector; ALTER EXTENSION vector UPDATE;`, map[string]string{ "ON_ERROR_STOP": "on", // Abort when any one command fails. @@ -30,9 +29,8 @@ func DisableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { log := logging.FromContext(ctx) stdout, stderr, err := exec.ExecInAllDatabases(ctx, - // Quiet the NOTICE from IF EXISTS, and install the pgAudit event triggers. + // Quiet the NOTICE from IF EXISTS, and drop pgvector extension. // - https://www.postgresql.org/docs/current/runtime-config-client.html - // - https://github.com/pgaudit/pgaudit#settings `SET client_min_messages = WARNING; DROP EXTENSION IF EXISTS vector;`, map[string]string{ "ON_ERROR_STOP": "on", // Abort when any one command fails. @@ -44,5 +42,5 @@ func DisableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { return err } -// PostgreSQLParameters sets the parameters required by pgAudit. +// PostgreSQLParameters sets the parameters required by pgvector. 
func PostgreSQLParameters(outParameters *postgres.Parameters) {} diff --git a/internal/postgres/reconcile.go b/internal/postgres/reconcile.go index f1b11f391a..3205b72e26 100644 --- a/internal/postgres/reconcile.go +++ b/internal/postgres/reconcile.go @@ -55,6 +55,59 @@ func AdditionalConfigVolumeMount() corev1.VolumeMount { } } +// PGTDEVolumeMount returns the name and mount path of the token and certificates for KMS. +func PGTDEVolumeMount() corev1.VolumeMount { + return corev1.VolumeMount{ + Name: naming.PGTDEVolume, + MountPath: naming.PGTDEMountPath, + ReadOnly: true, + } +} + +// PGTDEVolume returns the projected volume for pg_tde Vault secrets (token and optional CA cert). +func PGTDEVolume(vault *v1beta1.PGTDEVaultSpec) corev1.Volume { + volume := corev1.Volume{ + Name: naming.PGTDEVolume, + VolumeSource: corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{ + DefaultMode: initialize.Int32(0o600), + Sources: []corev1.VolumeProjection{ + {Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: vault.TokenSecret.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: vault.TokenSecret.Key, + Path: vault.TokenSecret.Key, + }, + }, + }}, + }, + }, + }, + } + + if vault.CASecret.Name != "" { + volume.Projected.Sources = append( + volume.Projected.Sources, corev1.VolumeProjection{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: vault.CASecret.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: vault.CASecret.Key, + Path: vault.CASecret.Key, + }, + }, + }, + }) + } + + return volume +} + // InstancePod initializes outInstancePod with the database container and the // volumes needed by PostgreSQL. 
func InstancePod(ctx context.Context, @@ -158,6 +211,11 @@ func InstancePod(ctx context.Context, downwardAPIVolumeMount, } + pgTDEVolumeMount := PGTDEVolumeMount() + if inCluster.Spec.Extensions.PGTDE.Vault != nil { + dbContainerMounts = append(dbContainerMounts, pgTDEVolumeMount) + } + if HugePages2MiRequested(inCluster) { dbContainerMounts = append(dbContainerMounts, corev1.VolumeMount{ @@ -236,6 +294,9 @@ func InstancePod(ctx context.Context, dataVolume, downwardAPIVolume, } + if vault := inCluster.Spec.Extensions.PGTDE.Vault; vault != nil { + outInstancePod.Volumes = append(outInstancePod.Volumes, PGTDEVolume(vault)) + } if HugePages2MiRequested(inCluster) { outInstancePod.Volumes = append(outInstancePod.Volumes, corev1.Volume{ diff --git a/percona/controller/pgbackup/controller.go b/percona/controller/pgbackup/controller.go index 288fca2dc3..ded6449074 100644 --- a/percona/controller/pgbackup/controller.go +++ b/percona/controller/pgbackup/controller.go @@ -677,6 +677,9 @@ func startBackup(ctx context.Context, c client.Client, pb *v2.PerconaPGBackup) e if a := pg.Annotations[pNaming.AnnotationBackupInProgress]; a != "" && a != pb.Name { return errors.Errorf("backup %s already in progress", a) } + + pg.Default() + if pg.Annotations == nil { pg.Annotations = make(map[string]string) } diff --git a/percona/controller/pgcluster/controller_test.go b/percona/controller/pgcluster/controller_test.go index 887d6278d8..0bcf8acb1d 100644 --- a/percona/controller/pgcluster/controller_test.go +++ b/percona/controller/pgcluster/controller_test.go @@ -2400,6 +2400,170 @@ var _ = Describe("CR Validations", Ordered, func() { }) }) }) + + Context("pg_tde validations", Ordered, func() { + When("creating a CR with valid pg_tde configurations", func() { + It("should accept pg_tde enabled with vault on PG 17", func() { + cr, err := readDefaultCR("cr-validation-tde-1", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 17 + cr.Spec.Extensions.PGTDE = 
v1beta1.PGTDESpec{ + Enabled: true, + Vault: &v1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com:8200", + TokenSecret: v1beta1.PGTDESecretObjectReference{ + Name: "vault-token", + Key: "token", + }, + }, + } + + Expect(k8sClient.Create(ctx, cr)).Should(Succeed()) + }) + + It("should accept pg_tde disabled without vault", func() { + cr, err := readDefaultCR("cr-validation-tde-2", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 17 + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: false, + } + + Expect(k8sClient.Create(ctx, cr)).Should(Succeed()) + }) + + It("should accept pg_tde not specified at all", func() { + cr, err := readDefaultCR("cr-validation-tde-3", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 16 + + Expect(k8sClient.Create(ctx, cr)).Should(Succeed()) + }) + + It("should accept pg_tde disabled with vault on PG < 17", func() { + cr, err := readDefaultCR("cr-validation-tde-4", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 16 + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: false, + Vault: &v1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com:8200", + TokenSecret: v1beta1.PGTDESecretObjectReference{ + Name: "vault-token", + Key: "token", + }, + }, + } + + Expect(k8sClient.Create(ctx, cr)).Should(Succeed()) + }) + }) + + When("creating a CR with invalid pg_tde configurations", func() { + It("should reject pg_tde enabled on PG < 17", func() { + cr, err := readDefaultCR("cr-validation-tde-5", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 16 + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: true, + Vault: &v1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com:8200", + TokenSecret: v1beta1.PGTDESecretObjectReference{ + Name: "vault-token", + Key: "token", + }, + }, + } + + err = k8sClient.Create(ctx, cr) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring( + "pg_tde is only supported for PG17 and 
above", + )) + }) + + It("should reject pg_tde enabled without vault", func() { + cr, err := readDefaultCR("cr-validation-tde-6", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 17 + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: true, + } + + err = k8sClient.Create(ctx, cr) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring( + "vault is required for enabling pg_tde", + )) + }) + }) + + When("updating a CR with pg_tde transition rules", func() { + It("should reject removing vault while pg_tde is still enabled", func() { + cr, err := readDefaultCR("cr-validation-tde-8", ns) + Expect(err).NotTo(HaveOccurred()) + + cr.Spec.PostgresVersion = 17 + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: true, + Vault: &v1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com:8200", + TokenSecret: v1beta1.PGTDESecretObjectReference{ + Name: "vault-token", + Key: "token", + }, + }, + } + Expect(k8sClient.Create(ctx, cr)).Should(Succeed()) + + updated := cr.DeepCopy() + updated.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: true, + } + + err = k8sClient.Update(ctx, updated) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring( + "vault is required for enabling pg_tde", + )) + }) + + It("should accept disabling pg_tde while keeping vault", func() { + cr := &v2.PerconaPGCluster{} + Expect(k8sClient.Get(ctx, types.NamespacedName{Name: "cr-validation-tde-8", Namespace: ns}, cr)).Should(Succeed()) + + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: false, + Vault: &v1beta1.PGTDEVaultSpec{ + Host: "https://vault.example.com:8200", + TokenSecret: v1beta1.PGTDESecretObjectReference{ + Name: "vault-token", + Key: "token", + }, + }, + } + + Expect(k8sClient.Update(ctx, cr)).Should(Succeed()) + }) + + It("should accept removing vault after pg_tde is disabled", func() { + cr := &v2.PerconaPGCluster{} + Expect(k8sClient.Get(ctx, types.NamespacedName{Name: "cr-validation-tde-8", Namespace: 
ns}, cr)).Should(Succeed()) + + cr.Spec.Extensions.PGTDE = v1beta1.PGTDESpec{ + Enabled: false, + } + + Expect(k8sClient.Update(ctx, cr)).Should(Succeed()) + }) + }) + }) }) var _ = Describe("Init Container", Ordered, func() { diff --git a/pkg/apis/pgv2.percona.com/v2/perconapgcluster_types.go b/pkg/apis/pgv2.percona.com/v2/perconapgcluster_types.go index 42ab3b60b5..f9443fdf0f 100644 --- a/pkg/apis/pgv2.percona.com/v2/perconapgcluster_types.go +++ b/pkg/apis/pgv2.percona.com/v2/perconapgcluster_types.go @@ -51,6 +51,7 @@ type PerconaPGCluster struct { Status PerconaPGClusterStatus `json:"status,omitempty"` } +// +kubebuilder:validation:XValidation:rule="!has(self.extensions) || !has(self.extensions.pg_tde) || !has(self.extensions.pg_tde.enabled) || !self.extensions.pg_tde.enabled || self.postgresVersion >= 17",message="pg_tde is only supported for PG17 and above" // +kubebuilder:validation:XValidation:rule="!has(self.users) || self.postgresVersion >= 15 || self.users.all(u, !has(u.grantPublicSchemaAccess) || !u.grantPublicSchemaAccess)",message="PostgresVersion must be >= 15 if grantPublicSchemaAccess exists and is true" type PerconaPGClusterSpec struct { // +optional @@ -257,7 +258,6 @@ func (cr *PerconaPGCluster) Default() { } t := true - f := false if cr.Spec.Backups.IsEnabled() { if cr.Spec.Backups.TrackLatestRestorableTime == nil { @@ -276,24 +276,62 @@ func (cr *PerconaPGCluster) Default() { } } + cr.SetExtensionDefaults() + + if cr.CompareVersion("2.6.0") >= 0 && cr.Spec.AutoCreateUserSchema == nil { + cr.Spec.AutoCreateUserSchema = &t + } +} + +func (cr *PerconaPGCluster) SetExtensionDefaults() { + // for backward compatibility, delete after 2.11.0 + if cr.Spec.Extensions.BuiltIn.PGStatMonitor != nil { + cr.Spec.Extensions.PGStatMonitor.Enabled = cr.Spec.Extensions.BuiltIn.PGStatMonitor + } + if cr.Spec.Extensions.BuiltIn.PGStatStatements != nil { + cr.Spec.Extensions.PGStatStatements.Enabled = cr.Spec.Extensions.BuiltIn.PGStatStatements + } + if 
cr.Spec.Extensions.BuiltIn.PGAudit != nil { + cr.Spec.Extensions.PGAudit.Enabled = cr.Spec.Extensions.BuiltIn.PGAudit + } + if cr.Spec.Extensions.BuiltIn.PGRepack != nil { + cr.Spec.Extensions.PGRepack.Enabled = cr.Spec.Extensions.BuiltIn.PGRepack + } + if cr.Spec.Extensions.BuiltIn.PGVector != nil { + cr.Spec.Extensions.PGVector.Enabled = cr.Spec.Extensions.BuiltIn.PGVector + } + + if cr.Spec.Extensions.PGStatMonitor.Enabled == nil { + cr.Spec.Extensions.PGStatMonitor.Enabled = ptr.To(true) + } + if cr.Spec.Extensions.PGStatStatements.Enabled == nil { + cr.Spec.Extensions.PGStatStatements.Enabled = ptr.To(false) + } + if cr.Spec.Extensions.PGAudit.Enabled == nil { + cr.Spec.Extensions.PGAudit.Enabled = ptr.To(true) + } + if cr.Spec.Extensions.PGVector.Enabled == nil { + cr.Spec.Extensions.PGVector.Enabled = ptr.To(false) + } + if cr.Spec.Extensions.PGRepack.Enabled == nil { + cr.Spec.Extensions.PGRepack.Enabled = ptr.To(false) + } + + // for backward compatibility, delete after 2.11.0 if cr.Spec.Extensions.BuiltIn.PGStatMonitor == nil { - cr.Spec.Extensions.BuiltIn.PGStatMonitor = &t + cr.Spec.Extensions.BuiltIn.PGStatMonitor = cr.Spec.Extensions.PGStatMonitor.Enabled } if cr.Spec.Extensions.BuiltIn.PGStatStatements == nil { - cr.Spec.Extensions.BuiltIn.PGStatStatements = &f + cr.Spec.Extensions.BuiltIn.PGStatStatements = cr.Spec.Extensions.PGStatStatements.Enabled } if cr.Spec.Extensions.BuiltIn.PGAudit == nil { - cr.Spec.Extensions.BuiltIn.PGAudit = &t + cr.Spec.Extensions.BuiltIn.PGAudit = cr.Spec.Extensions.PGAudit.Enabled } if cr.Spec.Extensions.BuiltIn.PGVector == nil { - cr.Spec.Extensions.BuiltIn.PGVector = &f + cr.Spec.Extensions.BuiltIn.PGVector = cr.Spec.Extensions.PGVector.Enabled } if cr.Spec.Extensions.BuiltIn.PGRepack == nil { - cr.Spec.Extensions.BuiltIn.PGRepack = &f - } - - if cr.CompareVersion("2.6.0") >= 0 && cr.Spec.AutoCreateUserSchema == nil { - cr.Spec.AutoCreateUserSchema = &t + cr.Spec.Extensions.BuiltIn.PGRepack = 
cr.Spec.Extensions.PGRepack.Enabled } if cr.CompareVersion("2.9.0") < 0 && cr.Spec.Config == nil { @@ -464,20 +502,21 @@ func (cr *PerconaPGCluster) ToCrunchy(ctx context.Context, postgresCluster *crun postgresCluster.Spec.InstanceSets = cr.Spec.InstanceSets.ToCrunchy() postgresCluster.Spec.Proxy = cr.Spec.Proxy.ToCrunchy(cr.Spec.CRVersion) - if cr.Spec.Extensions.BuiltIn.PGStatMonitor != nil { - postgresCluster.Spec.Extensions.PGStatMonitor = *cr.Spec.Extensions.BuiltIn.PGStatMonitor + postgresCluster.Spec.Extensions.PGTDE = cr.Spec.Extensions.PGTDE + if cr.Spec.Extensions.PGStatMonitor.Enabled != nil { + postgresCluster.Spec.Extensions.PGStatMonitor = *cr.Spec.Extensions.PGStatMonitor.Enabled } - if cr.Spec.Extensions.BuiltIn.PGStatStatements != nil { - postgresCluster.Spec.Extensions.PGStatStatements = *cr.Spec.Extensions.BuiltIn.PGStatStatements + if cr.Spec.Extensions.PGStatStatements.Enabled != nil { + postgresCluster.Spec.Extensions.PGStatStatements = *cr.Spec.Extensions.PGStatStatements.Enabled } - if cr.Spec.Extensions.BuiltIn.PGAudit != nil { - postgresCluster.Spec.Extensions.PGAudit = *cr.Spec.Extensions.BuiltIn.PGAudit + if cr.Spec.Extensions.PGAudit.Enabled != nil { + postgresCluster.Spec.Extensions.PGAudit = *cr.Spec.Extensions.PGAudit.Enabled } - if cr.Spec.Extensions.BuiltIn.PGVector != nil { - postgresCluster.Spec.Extensions.PGVector = *cr.Spec.Extensions.BuiltIn.PGVector + if cr.Spec.Extensions.PGVector.Enabled != nil { + postgresCluster.Spec.Extensions.PGVector = *cr.Spec.Extensions.PGVector.Enabled } - if cr.Spec.Extensions.BuiltIn.PGRepack != nil { - postgresCluster.Spec.Extensions.PGRepack = *cr.Spec.Extensions.BuiltIn.PGRepack + if cr.Spec.Extensions.PGRepack.Enabled != nil { + postgresCluster.Spec.Extensions.PGRepack = *cr.Spec.Extensions.PGRepack.Enabled } postgresCluster.Spec.TLSOnly = cr.Spec.TLSOnly @@ -873,12 +912,27 @@ type BuiltInExtensionsSpec struct { PGRepack *bool `json:"pg_repack,omitempty"` } +type BuiltInExtensionSpec struct { 
+ Enabled *bool `json:"enabled,omitempty"` +} + +// +kubebuilder:validation:XValidation:rule="!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)",message="to disable pg_tde first set enabled=false without removing vault and wait for pod restarts" type ExtensionsSpec struct { Image string `json:"image,omitempty"` ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"` Storage CustomExtensionsStorageSpec `json:"storage,omitempty"` - BuiltIn BuiltInExtensionsSpec `json:"builtin,omitempty"` - Custom []CustomExtensionSpec `json:"custom,omitempty"` + + // Deprecated: Use the per-extension fields under extensions (e.g. extensions.pg_stat_monitor) instead. This field will be removed after 2.11.0. + BuiltIn BuiltInExtensionsSpec `json:"builtin,omitempty"` + + PGStatMonitor BuiltInExtensionSpec `json:"pg_stat_monitor,omitempty"` + PGStatStatements BuiltInExtensionSpec `json:"pg_stat_statements,omitempty"` + PGAudit BuiltInExtensionSpec `json:"pg_audit,omitempty"` + PGVector BuiltInExtensionSpec `json:"pgvector,omitempty"` + PGRepack BuiltInExtensionSpec `json:"pg_repack,omitempty"` + PGTDE crunchyv1beta1.PGTDESpec `json:"pg_tde,omitempty"` + + Custom []CustomExtensionSpec `json:"custom,omitempty"` } type SecretsSpec struct { diff --git a/pkg/apis/pgv2.percona.com/v2/zz_generated.deepcopy.go b/pkg/apis/pgv2.percona.com/v2/zz_generated.deepcopy.go index fb7e40c4d4..43e36fb2a7 100644 --- a/pkg/apis/pgv2.percona.com/v2/zz_generated.deepcopy.go +++ b/pkg/apis/pgv2.percona.com/v2/zz_generated.deepcopy.go @@ -47,6 +47,26 @@ func (in *Backups) DeepCopy() *Backups { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *BuiltInExtensionSpec) DeepCopyInto(out *BuiltInExtensionSpec) { + *out = *in + if in.Enabled != nil { + in, out := &in.Enabled, &out.Enabled + *out = new(bool) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BuiltInExtensionSpec. +func (in *BuiltInExtensionSpec) DeepCopy() *BuiltInExtensionSpec { + if in == nil { + return nil + } + out := new(BuiltInExtensionSpec) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *BuiltInExtensionsSpec) DeepCopyInto(out *BuiltInExtensionsSpec) { *out = *in @@ -181,6 +201,12 @@ func (in *ExtensionsSpec) DeepCopyInto(out *ExtensionsSpec) { *out = *in in.Storage.DeepCopyInto(&out.Storage) in.BuiltIn.DeepCopyInto(&out.BuiltIn) + in.PGStatMonitor.DeepCopyInto(&out.PGStatMonitor) + in.PGStatStatements.DeepCopyInto(&out.PGStatStatements) + in.PGAudit.DeepCopyInto(&out.PGAudit) + in.PGVector.DeepCopyInto(&out.PGVector) + in.PGRepack.DeepCopyInto(&out.PGRepack) + in.PGTDE.DeepCopyInto(&out.PGTDE) if in.Custom != nil { in, out := &in.Custom, &out.Custom *out = make([]CustomExtensionSpec, len(*in)) diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go index 4aa3197e66..d1c91520f6 100644 --- a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go @@ -44,7 +44,8 @@ metadata: {} spec: backups: pgbackrest: {} - extensions: {} + extensions: + pg_tde: {} instances: null patroni: leaderLeaseDurationSeconds: 30 @@ -76,7 +77,8 @@ metadata: {} spec: backups: pgbackrest: {} - extensions: {} + extensions: + pg_tde: {} instances: - dataVolumeClaimSpec: resources: {} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go 
b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go index 5413e7bedb..4ed584b4ee 100644 --- a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go @@ -213,12 +213,41 @@ type InitContainerSpec struct { ContainerSecurityContext *corev1.SecurityContext `json:"containerSecurityContext,omitempty"` } +type PGTDESecretObjectReference struct { + // +kubebuilder:validation:Required + Name string `json:"name"` + // +kubebuilder:validation:Required + Key string `json:"key"` +} + +type PGTDEVaultSpec struct { + // Host of Vault server. + Host string `json:"host"` + // Name of the secret that contains the access token with read and write access to the mount path. + TokenSecret PGTDESecretObjectReference `json:"tokenSecret"` + // Name of the secret that contains the CA certificate for SSL verification. + CASecret PGTDESecretObjectReference `json:"caSecret,omitempty"` + // The mount point on the Vault server where the key provider should store the keys. 
+ // +kubebuilder:default=secret/data + MountPath string `json:"mountPath,omitempty"` +} + +// +kubebuilder:validation:XValidation:rule="!has(self.enabled) || (has(self.enabled) && self.enabled == false) || has(self.vault)",message="vault is required for enabling pg_tde" +type PGTDESpec struct { + Enabled bool `json:"enabled,omitempty"` + + Vault *PGTDEVaultSpec `json:"vault,omitempty"` +} + +// +kubebuilder:validation:XValidation:rule="!has(oldSelf.pg_tde) || !has(oldSelf.pg_tde.vault) || !has(oldSelf.pg_tde.enabled) || !oldSelf.pg_tde.enabled || has(self.pg_tde.vault)",message="to disable pg_tde first set enabled=false without removing vault and wait for pod restarts" type ExtensionsSpec struct { PGStatMonitor bool `json:"pgStatMonitor,omitempty"` PGAudit bool `json:"pgAudit,omitempty"` PGStatStatements bool `json:"pgStatStatements,omitempty"` PGVector bool `json:"pgvector,omitempty"` PGRepack bool `json:"pgRepack,omitempty"` + + PGTDE PGTDESpec `json:"pg_tde,omitempty"` } type TLSSpec struct { @@ -402,6 +431,9 @@ type PostgresClusterStatus struct { // Identifies the databases that have been installed into PostgreSQL. DatabaseRevision string `json:"databaseRevision,omitempty"` + // Identifies the pg_tde configuration that has been installed into PostgreSQL. + PGTDERevision string `json:"pgTDERevision,omitempty"` + + // Current state of PostgreSQL instances. 
// +listType=map // +listMapKey=name @@ -475,6 +507,7 @@ const ( PostgresClusterProgressing = "Progressing" ProxyAvailable = "ProxyAvailable" Registered = "Registered" + PGTDEEnabled = "PGTDEEnabled" ) type PostgresInstanceSetSpec struct { diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go index f78cb102aa..dc3c7d437d 100644 --- a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go @@ -435,6 +435,7 @@ func (in *ExporterSpec) DeepCopy() *ExporterSpec { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *ExtensionsSpec) DeepCopyInto(out *ExtensionsSpec) { *out = *in + in.PGTDE.DeepCopyInto(&out.PGTDE) } // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtensionsSpec. @@ -1491,6 +1492,58 @@ func (in *PGMonitorSpec) DeepCopy() *PGMonitorSpec { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGTDESecretObjectReference) DeepCopyInto(out *PGTDESecretObjectReference) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGTDESecretObjectReference. +func (in *PGTDESecretObjectReference) DeepCopy() *PGTDESecretObjectReference { + if in == nil { + return nil + } + out := new(PGTDESecretObjectReference) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGTDESpec) DeepCopyInto(out *PGTDESpec) { + *out = *in + if in.Vault != nil { + in, out := &in.Vault, &out.Vault + *out = new(PGTDEVaultSpec) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGTDESpec. +func (in *PGTDESpec) DeepCopy() *PGTDESpec { + if in == nil { + return nil + } + out := new(PGTDESpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGTDEVaultSpec) DeepCopyInto(out *PGTDEVaultSpec) { + *out = *in + out.TokenSecret = in.TokenSecret + out.CASecret = in.CASecret +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGTDEVaultSpec. +func (in *PGTDEVaultSpec) DeepCopy() *PGTDEVaultSpec { + if in == nil { + return nil + } + out := new(PGTDEVaultSpec) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *PGUpgrade) DeepCopyInto(out *PGUpgrade) { *out = *in @@ -2002,7 +2055,7 @@ func (in *PostgresClusterSpec) DeepCopyInto(out *PostgresClusterSpec) { *out = new(PostgresClusterAuthentication) (*in).DeepCopyInto(*out) } - out.Extensions = in.Extensions + in.Extensions.DeepCopyInto(&out.Extensions) if in.InitContainer != nil { in, out := &in.InitContainer, &out.InitContainer *out = new(InitContainerSpec)
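
For reviewers, the per-extension API introduced in ExtensionsSpec maps to a cr.yaml fragment like the following. This is an illustrative sketch assembled from the json tags in this patch and the Vault example in the commit message; the secret names and the Vault address are placeholders:

```
spec:
  postgresVersion: 17
  extensions:
    # New style: each builtin extension gets its own "enabled" field.
    # The deprecated spec.extensions.builtin section still works and
    # takes precedence if both are set.
    pg_stat_monitor:
      enabled: true
    pg_audit:
      enabled: true
    pg_tde:
      enabled: true
      vault:
        host: https://vault-service.vault-service.svc:8200
        mountPath: tde        # must be a KV v2 secrets engine
        tokenSecret:
          name: vault-secret  # placeholder secret name
          key: token
        caSecret:             # optional; can be omitted for plain HTTP
          name: vault-secret
          key: ca.crt
```

Note that removing the vault section while enabled is still true is rejected by the XValidation transition rule: first set enabled=false, wait for the pod restarts, then drop vault.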