The NDB operator brings automated and simplified database administration, provisioning, and life-cycle management to Kubernetes.
- Access to an NDB Server.
- A Kubernetes cluster to run against, which should have network connectivity to the NDB server. The operator automatically uses the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
- The operator-sdk installed.
- A clone of the source code (this repository).
- Cert-manager (only when running in non-OpenShift clusters). Follow the installation instructions here.
With the pre-requisites completed, the NDB Operator can be deployed in one of the following ways:
Runs the controller outside the Kubernetes cluster as a process, but installs the CRDs, services, and RBAC entities within the Kubernetes cluster. Generally used during development (without running webhooks):

```sh
make install run
```

Runs the controller as a pod and installs the CRDs, services, and RBAC entities within the Kubernetes cluster. Used to run the operator from the container image defined in the Makefile. Make sure that cert-manager is installed if not using OpenShift:

```sh
make deploy
```

The Helm charts for the NDB Operator project are available on artifacthub.io and can be installed by following the instructions here.
To deploy the operator from this repository on an OpenShift cluster, create a bundle and then install the operator via the operator-sdk.
```sh
# Export these environment variables to override the variables set in the Makefile
export DOCKER_USERNAME=dockerhub-username
export VERSION=x.y.z
export IMG=docker.io/$DOCKER_USERNAME/ndb-operator:v$VERSION
export BUNDLE_IMG=docker.io/$DOCKER_USERNAME/ndb-operator-bundle:v$VERSION

# Build and push the container image to the container registry
make docker-build docker-push

# Build the bundle (following the prompts for input), then build and push the bundle image to the container registry
make bundle bundle-build bundle-push

# Install the operator (run on the OpenShift cluster)
operator-sdk run bundle $BUNDLE_IMG
```
NOTE:
The container and bundle image creation steps can be skipped if existing images are present in the container registry.

NDBServer and credentials: The operator uses two custom resources: NDBServer (cluster-scoped) and Database (namespaced). NDBServer is cluster-scoped so that admins can store the NDB API credential secret in a restricted namespace (e.g. ndb-credentials) and set credentialSecretRef to point to it. Developers who create Database resources only need to reference the NDBServer by name in ndbRef (e.g. ndbRef: ndb); they can list and use cluster-scoped NDBServers without needing access to the secret's namespace.
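The access split described above can be enforced with standard Kubernetes RBAC. A minimal sketch follows; the role name `ndbserver-viewer` and the `developers` group are assumptions for illustration, not names used by the operator:

```yaml
# Grants read-only access to cluster-scoped NDBServer resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ndbserver-viewer # assumed name
rules:
  - apiGroups: ["ndb.nutanix.com"]
    resources: ["ndbservers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ndbserver-viewer-binding
subjects:
  - kind: Group
    name: developers # assumed group; bind to your actual users or groups
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: ndbserver-viewer
  apiGroup: rbac.authorization.k8s.io
```

Because nothing here grants access to the restricted credentials namespace, developers can reference the NDBServer by name without being able to read the NDB API secret.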
- NDB API credential secret: Create this in a restricted namespace (e.g. ndb-credentials) so only admins need access. Create that namespace if it does not exist, then apply the secret there.
- Database instance secret: Create this in the same namespace where you will create the Database resource (e.g. your application namespace).
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ndb-secret-name
  namespace: ndb-credentials # use a restricted namespace; create it first
type: Opaque
stringData:
  username: username-for-ndb-server
  password: password-for-ndb-server
  # ca_certificate is optional
  ca_certificate: |
    -----BEGIN CERTIFICATE-----
    CA CERTIFICATE
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Secret
metadata:
  name: db-instance-secret-name
  # no namespace, or set to the namespace where you will create the Database
type: Opaque
stringData:
  password: password-for-the-database-instance
  ssh_public_key: SSH-PUBLIC-KEY
```
Create the NDB credential namespace, then apply the secrets (the NDB secret YAML above includes namespace: ndb-credentials):

```sh
kubectl create namespace ndb-credentials
kubectl apply -f <path/to/secrets-manifest.yaml>
```

NDBServer is cluster-scoped. Admins create the NDB API credential secret in a restricted namespace (e.g. ndb-credentials) and set credentialSecretRef to that secret. The NDBServer resource itself has no namespace; developers in any namespace can reference it by name.
```yaml
apiVersion: ndb.nutanix.com/v1alpha1
kind: NDBServer
metadata:
  labels:
    app.kubernetes.io/name: ndbserver
    app.kubernetes.io/instance: ndbserver
    app.kubernetes.io/part-of: ndb-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: ndb-operator
  name: ndb
  # no namespace: NDBServer is cluster-scoped
spec:
  # Reference to the secret that holds the credentials for NDB (username, password, ca_certificate).
  # Point to the restricted namespace where the secret was created; developers do not need access to this namespace.
  credentialSecretRef:
    name: ndb-secret-name
    namespace: ndb-credentials
  # NDB Server's API URL
  server: https://[NDB IP]:8443/era/v0.9
  # Set to true to skip SSL certificate validation; should be false if ca_certificate is provided in the credential secret.
  skipCertificateVerification: true
```
Create the NDBServer resource using:

```sh
kubectl apply -f <path/to/NDBServer-manifest.yaml>
```

Create a Database resource. A database can either be provisioned or cloned on NDB, based on the inputs specified in the database manifest.
```yaml
apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # The name that will be used within the Kubernetes cluster
  name: db
spec:
  # Name of the cluster-scoped NDBServer (no namespace needed; developers reference it by name only)
  ndbRef: ndb
  isClone: false
  # Details of the database instance that is to be provisioned
  databaseInstance:
    # Cluster name or cluster ID where the database has to be provisioned
    # Can be fetched from the GET /clusters endpoint
    clusterName: "Nutanix Cluster Name" # Recommended: use the cluster name
    # clusterId: "Nutanix Cluster UUID" # Alternative: use the cluster UUID
    # The database instance name on NDB
    name: "Database-Instance-Name"
    # The description of the database instance
    description: Database Description
    # Names of the databases on that instance
    databaseNames:
      - database_one
      - database_two
      - database_three
    # Name of the secret holding the database instance credentials
    # data: password, ssh_public_key
    credentialSecret: db-instance-secret-name
    size: 10
    timezone: "UTC"
    type: postgres
    # You can specify any (or none) of these types of profiles: compute, software, network, dbParam
    # If not specified, the corresponding Out-of-Box (OOB) profile will be used wherever applicable
    # Name is case-sensitive. ID is the UUID of the profile. The profile should be in the "READY" state
    # "id" & "name" are optional. If neither is provided, the OOB profile may be resolved to any profile of that type
    profiles:
      compute:
        id: ""
        name: ""
      # A software profile is a mandatory input for closed-source engines: SQL Server & Oracle
      software:
        name: ""
        id: ""
      network:
        id: ""
        name: ""
      dbParam:
        name: ""
        id: ""
      # Only applicable for MSSQL databases
      dbParamInstance:
        name: ""
        id: ""
    timeMachine: # Optional block; if removed, the SLA defaults to NONE
      sla: "NAME OF THE SLA"
      dailySnapshotTime: "12:34:56"  # Time for the daily snapshot in hh:mm:ss format
      snapshotsPerDay: 4             # Number of snapshots per day
      logCatchUpFrequency: 90        # Log catch-up frequency (in minutes)
      weeklySnapshotDay: "WEDNESDAY" # Day of the week for the weekly snapshot
      monthlySnapshotDay: 24         # Day of the month for the monthly snapshot
      quarterlySnapshotMonth: "Jan"  # Start month of the quarterly snapshot
    additionalArguments: # Optional block; can specify additional arguments that are unique to each database engine
      listener_port: "8080"
```
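Since kubectl also accepts JSON manifests, a Database resource like the one above can be generated programmatically. A minimal sketch, assuming only the fields shown in this document (the helper function itself is illustrative, not part of the operator):

```python
import json

def database_manifest(name: str, ndb_ref: str, instance_name: str,
                      credential_secret: str, size: int = 10,
                      db_type: str = "postgres") -> dict:
    """Build a minimal provisioning Database manifest as a Python dict.

    `size` uses the same units as the `size` field in the YAML manifest above.
    """
    return {
        "apiVersion": "ndb.nutanix.com/v1alpha1",
        "kind": "Database",
        "metadata": {"name": name},
        "spec": {
            "ndbRef": ndb_ref,
            "isClone": False,
            "databaseInstance": {
                "clusterName": "Nutanix Cluster Name",
                "name": instance_name,
                "databaseNames": ["database_one"],
                "credentialSecret": credential_secret,
                "size": size,
                "timezone": "UTC",
                "type": db_type,
            },
        },
    }

manifest = database_manifest("db", "ndb", "Database-Instance-Name",
                             "db-instance-secret-name")
# Emit JSON that `kubectl apply -f -` can consume.
print(json.dumps(manifest, indent=2))
```

Pipe the output to a file (or straight into `kubectl apply -f -`) instead of hand-editing YAML when provisioning many instances.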
To clone an existing database instead of provisioning a new one, set isClone: true and fill in the clone block:

```yaml
apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # The name that will be used within the Kubernetes cluster
  name: db
spec:
  # Name of the cluster-scoped NDBServer (no namespace needed; developers reference it by name only)
  ndbRef: ndb
  isClone: true
  # Details of the clone that is to be created
  clone:
    # Type of the database to be cloned
    type: postgres
    # The clone instance name on NDB
    name: "Clone-Instance-Name"
    # The description of the clone instance
    description: Database Description
    # Cluster name or cluster ID of the cluster where the cloned database has to be provisioned
    # Can be fetched from the GET /clusters endpoint
    clusterName: "Nutanix Cluster Name" # Recommended: use the cluster name
    # clusterId: "Nutanix Cluster UUID" # Alternative: use the cluster UUID
    # You can specify any (or none) of these types of profiles: compute, software, network, dbParam
    # If not specified, the corresponding Out-of-Box (OOB) profile will be used wherever applicable
    # Name is case-sensitive. ID is the UUID of the profile. The profile should be in the "READY" state
    # "id" & "name" are optional. If neither is provided, the OOB profile may be resolved to any profile of that type
    profiles:
      compute:
        id: ""
        name: ""
      # A software profile is a mandatory input for closed-source engines: SQL Server & Oracle
      software:
        name: ""
        id: ""
      network:
        id: ""
        name: ""
      dbParam:
        name: ""
        id: ""
      # Only applicable for MSSQL databases
      dbParamInstance:
        name: ""
        id: ""
    # Name of the secret holding the clone instance credentials
    # data: password, ssh_public_key
    credentialSecret: clone-instance-secret-name
    timezone: "UTC"
    # Name or ID of the database to clone from; can be fetched from the NDB REST API Explorer
    sourceDatabaseName: "source-database-name" # Recommended: use the database name
    # sourceDatabaseId: "source-database-uuid" # Alternative: use the database UUID
    # Name or ID of the snapshot to clone from; can be fetched from the NDB REST API Explorer
    snapshotName: "snapshot-name" # Recommended: use the snapshot name, or leave empty for the latest snapshot
    # snapshotId: "snapshot-uuid" # Alternative: use the snapshot UUID
    additionalArguments: # Optional block; can specify additional arguments that are unique to each database engine
      expireInDays: 3
```
Create the Database resource:

```sh
kubectl apply -f <path/to/database-manifest.yaml>
```

Below are the various optional additionalArguments you can specify, along with examples of their corresponding values. Arguments that have defaults are indicated.
Provisioning Additional Arguments:

```yaml
# Postgres
additionalArguments:
  listener_port: "1111" # Default: "5432"

# MySQL
additionalArguments:
  listener_port: "1111" # Default: "3306"

# MongoDB
additionalArguments:
  listener_port: "1111" # Default: "27017"
  log_size: "150"       # Default: "100"
  journal_size: "150"   # Default: "100"

# MSSQL
additionalArguments:
  sql_user_name: "mazin"                         # Default: "sa"
  authentication_mode: "mixed"                   # Default: "windows". Options are "windows" or "mixed". Must specify sql_user_password.
  server_collation: "<server-collation>"         # Default: "SQL_Latin1_General_CP1_CI_AS"
  database_collation: "<database-collation>"     # Default: "SQL_Latin1_General_CP1_CI_AS"
  dbParameterProfileIdInstance: "<id-instance>"  # Default: fetched from profile
  vm_dbserver_admin_password: "<admin-password>" # Default: fetched from database secret
  sql_user_password: "<sql-user-password>"       # NO default. Must specify authentication_mode as "mixed".
  windows_domain_profile_id: <domain-profile-id> # NO default. Must specify vm_db_server_user.
  vm_db_server_user: <vm-db-server-user>         # NO default. Must specify windows_domain_profile_id.
  vm_win_license_key: <licenseKey>               # NO default
```

Cloning Additional Arguments:
MSSQL:
- windows_domain_profile_id
- era_worker_service_user
- sql_service_startup_account
- vm_win_license_key
- target_mountpoints_location
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

MongoDB:
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

Postgres:
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

MySQL:
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

To deregister the database and delete the VM, run:

```sh
kubectl delete -f <path/to/database-manifest.yaml>
```

To delete the NDBServer resource, run:

```sh
kubectl delete -f <path/to/NDBServer-manifest.yaml>
```

In 0.5.3, NDBServer is cluster-scoped and uses credentialSecretRef (secret name + namespace) instead of credentialSecret (name only). There is no in-place conversion; you must perform a one-time migration before or as part of the upgrade.
Important: The Kubernetes API does not allow changing a CRD from namespaced to cluster-scoped while custom resources of that kind exist. You must delete all existing NDBServer resources (after backing them up) before installing the 0.5.3 CRD.
Record the name and spec of each NDBServer so you can recreate them. For example:
```sh
kubectl get ndbserver -A -o yaml > ndbserver-backup.yaml
```

Note which namespace each NDBServer was in and which secret name it used (spec.credentialSecret).
Create a dedicated namespace for NDB API credentials and ensure the secret exists there (create it or copy from the old namespace):
```sh
kubectl create namespace ndb-credentials

# If the secret was in another namespace (e.g. default), copy it:
kubectl get secret <your-ndb-secret-name> -n <old-namespace> -o yaml | \
  sed 's/namespace: .*/namespace: ndb-credentials/' | kubectl apply -f -

# Or create a new secret in ndb-credentials with your NDB API credentials.
```

Use a single secret name (e.g. ndb-api-secret) that you will reference in the new NDBServer.
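A sed rewrite of exported YAML can carry over server-managed fields (resourceVersion, uid) that are best dropped before re-applying. As an alternative sketch, the same retargeting step can be done on the secret as a JSON object; the helper name `retarget_secret` is an assumption for illustration:

```python
import json

def retarget_secret(secret: dict, namespace: str) -> dict:
    """Prepare an exported Secret object for re-creation in another namespace."""
    cleaned = json.loads(json.dumps(secret))  # deep copy; leave the input untouched
    meta = cleaned.setdefault("metadata", {})
    # Drop server-managed fields that should not be re-applied.
    for field in ("resourceVersion", "uid", "creationTimestamp", "managedFields"):
        meta.pop(field, None)
    meta["namespace"] = namespace
    return cleaned

# Example usage with kubectl's JSON output:
#   kubectl get secret <your-ndb-secret-name> -n <old-namespace> -o json > secret.json
#   python retarget.py < secret.json | kubectl apply -f -
```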
You must delete all NDBServer resources before upgrading, or the CRD update (namespaced → cluster-scoped) will be rejected.
```sh
kubectl delete ndbserver --all -A
```

Database resources can stay; they will reconcile again once the new cluster-scoped NDBServer exists and ndbRef matches its name.
Install or deploy the ndb-operator 0.5.3 release (manifests, Helm, or OLM). This installs the updated CRD with scope: Cluster and the new operator image.
Create one NDBServer per logical NDB server. Use the same metadata.name as before if you want existing Database resources (which reference NDBServer by name in spec.ndbRef) to work without changes.
NOTE:
If the old namespaced CRD is still present and blocks the install, delete it manually:

```sh
kubectl delete crds ndbservers.ndb.nutanix.com
```

Example:
```yaml
apiVersion: ndb.nutanix.com/v1alpha1
kind: NDBServer
metadata:
  name: ndb # same name as before so existing Databases need no change
spec:
  server: "https://<ndb-server>:8443/era/v0.9"
  credentialSecretRef:
    name: ndb-api-secret
    namespace: ndb-credentials
```

Apply with kubectl apply -f <file>. Note that metadata has no namespace (NDBServer is cluster-scoped).
Ensure each Database has spec.ndbRef set to the cluster-scoped NDBServer’s name (e.g. ndb). If you used the same NDBServer name as before, no change is needed. Check reconciliation:
```sh
kubectl get databases -A
kubectl get ndbserver
```

| Step | Action |
|---|---|
| 1 | Back up NDBServer(s) (kubectl get ndbserver -A -o yaml) |
| 2 | Create ndb-credentials namespace and put/copy NDB API secret there |
| 3 | Delete all namespaced NDBServer(s) (kubectl delete ndbserver --all -A) |
| 4 | Install ndb-operator 0.5.3 |
| 5 | Create new cluster-scoped NDBServer(s) with credentialSecretRef and same name(s) as before |
| 6 | Confirm Databases reconcile (same ndbRef if you kept the same NDBServer name) |
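The per-resource rewrite in step 5 (drop the namespace, replace credentialSecret with credentialSecretRef) is mechanical, so it can be sketched as a small helper. This is illustrative only; the function name and the assumption that every pre-0.5.3 spec used credentialSecret are mine:

```python
def to_cluster_scoped(old: dict, secret_namespace: str = "ndb-credentials") -> dict:
    """Rewrite a pre-0.5.3 namespaced NDBServer into the 0.5.3 cluster-scoped shape."""
    new = {
        "apiVersion": old.get("apiVersion", "ndb.nutanix.com/v1alpha1"),
        "kind": "NDBServer",
        # Keep the old name so Database resources referencing it via ndbRef still match;
        # drop metadata.namespace entirely (the resource is now cluster-scoped).
        "metadata": {"name": old["metadata"]["name"]},
        "spec": dict(old["spec"]),
    }
    # credentialSecret (name only) becomes credentialSecretRef (name + namespace).
    secret_name = new["spec"].pop("credentialSecret", None)
    if secret_name is not None:
        new["spec"]["credentialSecretRef"] = {
            "name": secret_name,
            "namespace": secret_namespace,
        }
    return new

# Example: transform one entry from ndbserver-backup.yaml (shown here as a dict).
old = {
    "apiVersion": "ndb.nutanix.com/v1alpha1",
    "kind": "NDBServer",
    "metadata": {"name": "ndb", "namespace": "default"},
    "spec": {"server": "https://<ndb-server>:8443/era/v0.9",
             "credentialSecret": "ndb-api-secret"},
}
new = to_cluster_scoped(old)
```

Feed each backed-up NDBServer through the helper, then apply the results after installing the 0.5.3 CRD.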
If you are on an older version where NDBServer was namespaced, the same conceptual migration applies: NDBServer is now cluster-scoped and uses credentialSecretRef (name + namespace) instead of credentialSecret. Follow the steps in Upgrading from 0.5.2 to 0.5.3 above.
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

```sh
make generate manifests
```

Add the CRDs to the Kubernetes cluster:

```sh
make install
```

Run your controller locally (this will run in the foreground, so switch to a new terminal if you want to leave it running):

```sh
make run
```

NOTES:
- You can also run this in one step by running `make install run`
- Run `make --help` for more information on all potential `make` targets
More information can be found in the Kubebuilder Documentation.
Build and push your image to the location specified by IMG:

```sh
make docker-build docker-push IMG=<some-registry>/ndb-operator:tag
```

Deploy the controller to the cluster with the image specified by IMG:

```sh
make deploy IMG=<some-registry>/ndb-operator:tag
```

Uninstall the operator based on the installation/deployment environment:

```sh
# Stops the controller process
ctrl + c
# Uninstalls the CRDs
make uninstall
```

```sh
# Removes the deployment, CRDs, services and RBAC entities
make undeploy
```

```sh
# NAME: name of the release created during installation
helm uninstall NAME
```

```sh
operator-sdk cleanup ndb-operator --delete-all
```

This project aims to follow the Kubernetes Operator pattern. It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
When a custom resource of the kind Database is created, the reconciler provisions the database and then creates a Service and an Endpoint that maps to the IP address of the provisioned database instance. Application pods/deployments can use this service to interact with the databases provisioned on NDB through the native Kubernetes service.
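For illustration, the Service/Endpoints pair created for a Database CR named db would look roughly like the sketch below. Only the `<name>-svc` naming is taken from this document; the port and IP values are assumptions:

```yaml
# Illustrative sketch: a selector-less Service backed by a manually managed Endpoints
# object pointing at the provisioned database instance.
apiVersion: v1
kind: Service
metadata:
  name: db-svc # "<Database CR name>-svc"
spec:
  ports:
    - port: 5432 # assumed: the database listener port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: db-svc # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.42 # assumed: IP of the provisioned database instance
    ports:
      - port: 5432
```

Application pods then connect to db-svc.<namespace>.svc.cluster.local as they would to any in-cluster service.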
Pods can specify an initContainer to wait for the service (and hence the database instance) to get created before they start up.
```yaml
initContainers:
  - name: init-db
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup <<Database CR Name>>-svc.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for database service; sleep 2; done"]
```

See the contributing docs.
This code is developed in the open with input from the community through issues and PRs. A Nutanix engineering team serves as the maintainer. Documentation is available in the project repository. Issues and enhancement requests can be submitted in the Issues tab of this repository. Please search for and review the existing open issues before submitting a new issue.
Copyright 2022-2023 Nutanix, Inc.
The project is released under version 2.0 of the Apache license.