From 8e3358575bdc09b1467c28c3bda2ce6639e36dba Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Mon, 4 May 2026 20:01:03 +1000
Subject: [PATCH 1/4] Add installation instructions for Rook-Ceph

Signed-off-by: Zac Dover
---
 docs/architecture/rook-ceph-install.md | 426 +++++++++++++++++++++++++
 1 file changed, 426 insertions(+)
 create mode 100644 docs/architecture/rook-ceph-install.md

diff --git a/docs/architecture/rook-ceph-install.md b/docs/architecture/rook-ceph-install.md
new file mode 100644
index 0000000..827797b
--- /dev/null
+++ b/docs/architecture/rook-ceph-install.md
@@ -0,0 +1,426 @@
---
title: Rook-Ceph Installation Procedure
---

# Installing Rook-Ceph on Kubernetes

## Overview

This guide provides step-by-step instructions for deploying a Ceph storage
cluster using the Rook operator on Kubernetes. Rook automates the deployment,
configuration, and management of Ceph clusters within Kubernetes environments.

## Prerequisites

Before beginning the installation, ensure that the following requirements are met:

### Kubernetes Cluster Requirements

- Kubernetes v1.25 or higher
- `kubectl` configured to communicate with your cluster
- Administrator access to the Kubernetes cluster
- At least 3 worker nodes for a production cluster (1 node minimum for testing)
- Verify compatibility between your Kubernetes version and the Rook version you
  intend to deploy; see the [Rook releases page](https://github.com/rook/rook/releases)
  for version compatibility information

### Storage Requirements

- Raw block devices available on worker nodes (unformatted, no filesystem)
- Minimum 10 GB of storage per OSD
- Devices must not be mounted or in use by the operating system

### Network Requirements

- Network connectivity between all cluster nodes
- Network access between pods is handled by the Kubernetes network plugin (CNI).
  Ensure that your CNI supports the required pod-to-pod communication.
  If you need
  to open ports for external access to Ceph services, the typical ports are
  6789, 3300, and 6800-7300.

### System Requirements

- Linux kernel 4.5 or higher (5.x recommended)
- LVM2 packages installed on all nodes
- Minimum 2 GB RAM per node (4 GB+ recommended)
- `helm` installed if using Helm-based deployment (optional)

## Installation Steps

### Step 1: Clone the Rook Repository

Clone the Rook repository to get the deployment manifests:

```bash
git clone --single-branch --branch release-1.14 https://github.com/rook/rook.git
cd rook/deploy/examples
```

**Note:** Replace `release-1.14` with the desired Rook version. Check the Rook
releases page for the latest stable version, and verify that it is compatible
with your Kubernetes version before proceeding.

### Step 2: Deploy the Rook Operator

Install the Rook operator, which manages the Ceph cluster lifecycle:

```bash
# Deploy common resources
kubectl create -f crds.yaml
kubectl create -f common.yaml

# Deploy the Rook operator
kubectl create -f operator.yaml
```

### Step 3: Verify Operator Deployment

Confirm that the Rook operator is running:

```bash
kubectl -n rook-ceph get pods
```

**Expected output:**

```
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-                   1/1     Running   0          30s
```

Wait until the operator pod shows `Running` status before proceeding.
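Rather than polling `kubectl get pods` by hand, you can block until the operator reports ready. This snippet is a sketch, not part of the original procedure; it assumes the operator Deployment carries the standard `app=rook-ceph-operator` label from `operator.yaml`, so adjust the selector if your manifests differ:

```bash
# Block until the Rook operator pod reaches the Ready condition
# (fails after 5 minutes if it never does).
kubectl -n rook-ceph wait pod \
  --selector app=rook-ceph-operator \
  --for=condition=Ready \
  --timeout=300s
```

A non-zero exit status from `kubectl wait` means the timeout expired, which makes the command convenient as a gate in CI or bootstrap scripts.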

### Step 4: Create the Ceph Cluster

Deploy a Ceph cluster using the cluster manifest:

```bash
kubectl create -f cluster.yaml
```

This creates a basic Ceph cluster with the following default configuration:

- 3 Mon (Monitor) daemons
- 2 Mgr (Manager) daemons
- OSDs automatically provisioned from available devices
- Dashboard enabled

### Step 5: Monitor Cluster Deployment

Watch the cluster deployment progress:

```bash
kubectl -n rook-ceph get pods -w
```

The deployment is complete when all pods are in the `Running` state. This may
take several minutes.

**Expected pods:**

- `rook-ceph-mon-*` - Monitor daemons (typically 3)
- `rook-ceph-mgr-*` - Manager daemons (typically 2)
- `rook-ceph-osd-*` - OSD daemons (one per device)
- `rook-ceph-crashcollector-*` - Crash collectors (one per node)

### Step 6: Verify Cluster Health

Check the Ceph cluster health:

```bash
# Deploy the Rook toolbox for cluster management
kubectl create -f toolbox.yaml

# Wait for the toolbox to be ready
kubectl -n rook-ceph rollout status deployment/rook-ceph-tools

# Check cluster status
kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- ceph status
```

**Healthy cluster output should show:**

```
cluster:
  id:
  health: HEALTH_OK

services:
  mon: 3 daemons, quorum a,b,c
  mgr: a(active), standbys: b
  osd: X osds: X up, X in
```

## Configuration Options

### Customizing the Cluster

Edit `cluster.yaml` to customize your deployment before creating the cluster:

#### Storage Configuration

Specify which devices to use for OSDs:

```yaml
storage:
  useAllNodes: true
  useAllDevices: false
  deviceFilter: "^sd[b-z]"  # Use sdb, sdc, etc.
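  # deviceFilter is a regular expression matched against the short device name
  # (without the /dev/ prefix), so "^sd[b-z]" matches sdb through sdz but never
  # sda, which typically holds the operating system.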
```

Or specify devices explicitly:

```yaml
storage:
  nodes:
  - name: "node1"
    devices:
    - name: "/dev/sdb"
  - name: "node2"
    devices:
    - name: "/dev/sdc"
```

#### Resource Limits

Set resource limits for Ceph daemons:

```yaml
resources:
  mon:
    limits:
      cpu: "2000m"
      memory: "4Gi"
    requests:
      cpu: "1000m"
      memory: "2Gi"
  osd:
    limits:
      cpu: "2000m"
      memory: "4Gi"
    requests:
      cpu: "1000m"
      memory: "2Gi"
```

#### Network Configuration

Configure network settings for client and cluster traffic:

```yaml
network:
  provider: host  # or multus for advanced networking
  # Uncomment to encrypt traffic between Ceph daemons and clients
  # connections:
  #   encryption:
  #     enabled: true
```

### Dashboard Access

Enable and access the Ceph dashboard:

```bash
# The dashboard is enabled by default in cluster.yaml

# Get the dashboard password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath="{['data']['password']}" | base64 --decode && echo

# Port-forward to access the dashboard
kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 8443:8443
```

Access the dashboard at: `https://localhost:8443`

Username: `admin`
Password: (from the command above)

## Creating Storage Classes

### Block Storage (RBD)

Create a storage class for block devices:

```bash
kubectl create -f csi/rbd/storageclass.yaml
```

Test the storage class:

```bash
# Create a test PVC
cat <
```

Date: Mon, 4 May 2026 20:10:25 +1000
Subject: [PATCH 2/4] Add Rook-Ceph installation procedure

Add a Rook-Ceph installation procedure to the CobaltCore documentation.
Signed-off-by: Zac Dover
---
 docs/architecture/index.md             | 3 +++
 docs/architecture/rook-ceph-install.md | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/docs/architecture/index.md b/docs/architecture/index.md
index 5d44602..92435c8 100644
--- a/docs/architecture/index.md
+++ b/docs/architecture/index.md
@@ -14,3 +14,6 @@ CobaltCore is built on top of OpenStack and IronCore, leveraging their capabilit
 - [**HA Service**](./cluster#ha-service): The high availability service that ensures critical workloads remain operational even in the event of failures.
 - [**Cortex**](./cortex): Smart initial placement and scheduling service for compute, storage, and network in cloud-native cloud environments.
 - [**Ceph**](./ceph): An all-in-one storage system that provides object, block, and file storage and delivers extraordinary scalability.
+- [**Rook-Ceph Installation**](./rook-ceph-install): A procedure for deploying
+  the all-in-one storage system that provides object, block, and file storage
+  and delivers extraordinary scalability.
diff --git a/docs/architecture/rook-ceph-install.md b/docs/architecture/rook-ceph-install.md
index 827797b..d1dbd52 100644
--- a/docs/architecture/rook-ceph-install.md
+++ b/docs/architecture/rook-ceph-install.md
@@ -10,6 +10,9 @@ This guide provides step-by-step instructions for deploying a Ceph storage
 cluster using the Rook operator on Kubernetes. Rook automates the deployment,
 configuration, and management of Ceph clusters within Kubernetes environments.
 
+The instructions here are meant only as a general guideline. We recommend that you use the instructions found in the [official Rook documentation](https://rook.io/docs/rook/latest/) and the [upstream Ceph documentation](https://docs.ceph.com/).
+
+
 ## Prerequisites
 
 Before beginning the installation, ensure the following requirements are met:

From 7121070b6708b61dd17a418ed7c2c2bc100a2c73 Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Wed, 6 May 2026 03:31:05 +1000
Subject: [PATCH 3/4] fixup

Signed-off-by: Zac Dover
---
 docs/architecture/rook-ceph-install.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/architecture/rook-ceph-install.md b/docs/architecture/rook-ceph-install.md
index d1dbd52..deb128f 100644
--- a/docs/architecture/rook-ceph-install.md
+++ b/docs/architecture/rook-ceph-install.md
@@ -10,7 +10,10 @@ This guide provides step-by-step instructions for deploying a Ceph storage
 cluster using the Rook operator on Kubernetes. Rook automates the deployment,
 configuration, and management of Ceph clusters within Kubernetes environments.
 
-The instructions here are meant only as a general guideline. We recommend that you use the instructions found in the [official Rook documentation](https://rook.io/docs/rook/latest/) and the [upstream Ceph documentation](https://docs.ceph.com/).
+The instructions here are meant only as a general guideline. We recommend that
+you use the instructions found in the [official Rook
+documentation](https://rook.io/docs/rook/latest/) and the [upstream Ceph
+documentation](https://docs.ceph.com/).
 
 ## Prerequisites

From 41c0e02433e8f9d39fc252fe244319022ed9e42d Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Thu, 30 Apr 2026 01:35:45 +1000
Subject: [PATCH 4/4] s/CobaltChore/CobaltCore/

Signed-off-by: Zac Dover
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 02115d7..c7a04a8 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@
 
 ## About this project
 
-This is the official CobaltChore documentation.
+This is the official CobaltCore documentation.
 
 ## Requirements and Setup