A collection of resources and scripts to quickly deploy an RHDH instance in Kubernetes, preloaded with useful plugins, example entities, and third-party integrations for rapid testing and development.
These scripts were created out of a need to quickly spin up an RHDH instance preconfigured for testing. The goal is to streamline the process of deploying Red Hat Developer Hub with preconfigured plugins and supporting resources, making it easy for team members, especially those working on plugins, to verify changes, report bugs, and explore features in a real environment. It also serves as a practical example of how RHDH and its plugins can be integrated and showcased.
The RHDH Start-Up Script is designed to launch an RHDH instance with a selected set of plugins and supporting resources. Its goals are:
- To reduce the manual work of configuring plugins and their integration points
- To create an environment well-suited for various testing scenarios
- To demonstrate the capabilities of RHDH and its bundled plugins
It also deploys companion resources that are registered into the catalog, making it useful as a standalone demo environment.
These scripts are tailored for OpenShift clusters, ideally short-lived or non-production environments like:
- OpenShift Local (CRC)
- Cluster Bot clusters
They may work on other Kubernetes platforms, but this hasn't been tested or verified. At the time of writing, most of my testing has been performed using Cluster Bot.
Important: Running this script will install several components and operators, including Red Hat SSO, Advanced Cluster Management, OpenShift GitOps, and more. It will also create service accounts and other cluster-level resources. For this reason, it is strongly recommended to use a disposable or non-critical cluster. Before running the scripts, review the resources being deployed. Additional documentation walks through the steps taken for setting up specific plugin configurations.
This project includes a collection of scripts and Kubernetes resources designed to streamline the deployment and testing of an RHDH instance. The structure and tooling are intended to offer both an out-of-the-box experience and the flexibility to customize plugin configurations.
The main setup script (start.sh) is used to:
- Launch an RHDH instance
- Deploy a variety of preconfigured Kubernetes resources that integrate with RHDH
- Optionally configure additional plugins and integrations
By default, the script is designed to work "out of the box" on first run, creating a working RHDH environment with minimal effort. Follow-up runs can be used to customize the setup further or to selectively configure only specific components. For users who prefer full control from the beginning, it's also possible to bypass the initial setup and tailor the configuration to specific needs. This versatility is intentional and core to the project's design.
The teardown script (teardown.sh) is used to:
- Clean up the RHDH instance and all associated resources
- Free up compute resources for redeploying a different plugin set or environment configuration
This script removes nearly everything created by the setup process, except for the namespace itself, allowing for quick redeployment into the same logical space.
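Taken together, the two scripts support a quick iteration loop; a sketch of redeploying with a different plugin set:

```shell
./teardown.sh   # remove the RHDH instance and associated resources (the namespace is kept)
# ...adjust plugin configuration under resources/user-resources/ as needed...
./start.sh      # redeploy into the same namespace
```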
A set of Kubernetes resources are included to support demoing various RHDH plugins. These resources are:
- Preconfigured for ease of use
- Intended to simplify setup for plugin features like GitOps, SSO, etc.
Some additional configuration may be required for smooth deployment; this is detailed in the plugin-specific documentation.
The /auth directory is intended to store sensitive configuration files used by various plugins, an
example being the GitHub App credentials used for authentication and catalog ingestion. While these
values could be set via environment variables, storing them in a centralized directory:
- Keeps credential files easy to reference and manage
- Supports manual plugin configuration outside of the automated script flow
More details about this directory and how to populate it are provided in the plugin-specific documentation.
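As a sketch, the directory might hold a GitHub App credentials file. The filename and keys below are illustrative (matching the fields a GitHub App typically provides), not names the scripts require:

```shell
mkdir -p auth
# Hypothetical credentials file; a .local.yaml suffix keeps it out of version control
cat > auth/github-app-credentials.local.yaml <<'EOF'
appId: 12345
clientId: Iv1.0example0example
clientSecret: example-client-secret
webhookSecret: example-webhook-secret
privateKey: |
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----
EOF
```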
This project uses the upstream RHDH dynamic-plugins.default.yaml directly, keeping configuration
in sync with official releases.
A GitHub Action (sync-dynmaic-plugins.yaml) automatically:
- Fetches the latest dynamic-plugins.default.yaml from upstream RHDH
- Generates a ConfigMap version (dynamic-plugins-configmap.yaml)
- Creates a PR when changes are detected
The sync runs weekly and can be triggered manually via workflow_dispatch.
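With the GitHub CLI installed and authenticated, the manual trigger can also be invoked from a checkout without the web UI (a sketch; gh must be pointed at your fork or the upstream repo):

```shell
# Trigger the sync workflow on demand
gh workflow run sync-dynmaic-plugins.yaml
# Optionally follow the resulting run
gh run watch
```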
| File | Purpose |
|---|---|
| resources/rhdh/dynamic-plugins.default.yaml | Synced from upstream RHDH (do not edit) |
| resources/rhdh/dynamic-plugins-configmap.yaml | Generated ConfigMap for deployment |
If you need to regenerate the ConfigMap after manually updating the default file:
./scripts/generate-configmap.sh [namespace]

There are three ways to run the testbed scripts, depending on your environment and preferences:
| Method | Best For | Requirements |
|---|---|---|
| Local | Development, quick iterations, full control | oc, helm installed locally |
| Docker Compose | macOS users, isolated environment, no local tool setup | Docker Desktop |
| Kubernetes Job | CI/CD pipelines, fully automated, no local tools needed | Cluster access only |
These scripts are designed to work out-of-the-box with minimal setup. In fact, it's recommended that you start with the default setup to better understand how everything fits together. You can always customize and extend things later.
Prerequisites: oc (the OpenShift CLI) and helm
Step 1. Fork and clone this repo:
git clone https://github.com/PatAKnight/rhdh-testbed.git
cd rhdh-testbed

Step 2. Ensure access to an OpenShift Cluster:
- These scripts rely on oc and helm to manage the resources for you, so an OpenShift cluster is a must.
Important: Most testing has been done using Cluster Bot
Step 3. Configure your environment by copying the provided sample to create your local .env file:
cp .env.sample .env

Step 4. Open the .env file and set at least the following values obtained from your cluster:
K8S_CLUSTER_TOKEN=<your-cluster-token>
K8S_CLUSTER_URL=<your-cluster-api-url>
K8S_CLUSTER_NAME="test-cluster"
SIGN_IN_PAGE="guest"

Step 5. Run the script:
./start.sh

Step 6. Access your RHDH instance:
- Once deployment is complete, navigate to the exposed route for RHDH in your OpenShift cluster (this is typically displayed in the script output).
You'll now have a clean, working instance of RHDH that's ready to be enhanced in the next steps.
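If you're already logged in with oc, the values needed for Step 4 can be read straight from the CLI. Note that --show-token prints your current session token, so treat the output as a secret:

```shell
oc whoami --show-server   # value for K8S_CLUSTER_URL
oc whoami --show-token    # value for K8S_CLUSTER_TOKEN
```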
If you prefer containerized execution, use this instead of the local installation above:
- Prereqs: Docker, and a configured .env (see Steps 3–4 from the local installation)
- First-time setup:
docker compose build
docker compose up rhdh-start
- Teardown when done:
docker compose up rhdh-teardown

For fully automated, hands-off deployment directly on your cluster without any local tooling requirements, you can deploy the testbed as a Kubernetes Job.
- Access to an OpenShift cluster with cluster-admin or equivalent permissions
- oc or kubectl CLI (only needed to apply the manifests)
A pre-built container image is available at ghcr.io/PatAKnight/rhdh-testbed:latest.
Step 1. Create the deployment namespace:
oc new-project rhdh-testbed
Step 2a. Configure the deployment by editing deploy/configmap.yaml:
Key ConfigMap values:
| Variable | Description | Default |
|---|---|---|
| NAMESPACE | Namespace where RHDH will be deployed | rhdh |
| RELEASE_NAME | Helm release name | backstage |
| K8S_CLUSTER_NAME | Name for your cluster in RHDH | my-cluster |
| SIGN_IN_PAGE | Authentication method (guest or oidc) | guest |
Note: Plugin enablement is configured via the dynamic plugins ConfigMap (see Step 2c below), not through environment variables.
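These values live in deploy/configmap.yaml. One way to override a single value after the ConfigMap has been applied is oc set data; the ConfigMap name rhdh-testbed-config below is a hypothetical stand-in for whatever name deploy/configmap.yaml actually defines:

```shell
# Hypothetical ConfigMap name; check deploy/configmap.yaml for the real one
oc -n rhdh-testbed set data configmap/rhdh-testbed-config K8S_CLUSTER_NAME=demo-cluster
```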
Step 2b. Create your secret file using deploy/secret-template.yaml as an example:
# Step 2b: Create and apply your secret
cp deploy/secret-template.yaml deploy/secret.local.yaml
# Edit with your values THEN APPLY
oc apply -f deploy/secret.local.yaml -n rhdh-testbed

Step 2c. Configure dynamic plugins:
The testbed detects which plugins you've enabled in your dynamic-plugins-configmap.yaml and
automatically deploys any required cluster resources (operators, CRDs, etc.).
Option A: Use the default configuration (simplest)
If you don't need custom plugins, the pre-included configuration works out of the box:
# Create ConfigMap from the default dynamic plugins config
oc create configmap rhdh-dynamic-plugins \
--from-file=dynamic-plugins.yaml=resources/rhdh/dynamic-plugins-configmap.yaml \
-n rhdh-testbed

Option B: Customize which plugins are enabled
For more control over which plugins and operators are deployed:
# Copy the default config
cp resources/rhdh/dynamic-plugins-configmap.yaml my-plugins-config.yaml
# Edit to enable/disable plugins by setting disabled: false/true
# For example, to enable Keycloak SSO:
# - package: ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-keycloak-dynamic
# disabled: false # Change from true to false
# Create ConfigMap from your customized config
oc create configmap rhdh-dynamic-plugins \
--from-file=dynamic-plugins.yaml=my-plugins-config.yaml \
-n rhdh-testbed

Plugins that trigger cluster resource deployment:
| Plugin Pattern | What Gets Deployed |
|---|---|
| plugin-catalog-backend-module-keycloak | Red Hat SSO Operator + Keycloak Realm |
| plugin-tekton | OpenShift Pipelines Operator |
| plugin-ocm / plugin-ocm-backend | Advanced Cluster Management Operator |
| plugin-3scale-backend | 3scale Operator + API Manager |
| plugin-kubernetes / plugin-topology | ServiceAccount token configuration |
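If you edit your plugin config after the ConfigMap already exists, oc create will fail with an AlreadyExists error; the usual dry-run/apply idiom regenerates it in place (assuming the my-plugins-config.yaml file from Option B):

```shell
# Regenerate the ConfigMap manifest locally, then apply it over the existing one
oc create configmap rhdh-dynamic-plugins \
  --from-file=dynamic-plugins.yaml=my-plugins-config.yaml \
  --dry-run=client -o yaml | oc apply -n rhdh-testbed -f -
```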
Work in Progress: Not all plugins require cluster resources, and not all plugins that do require resources have been integrated into the automation scripts yet. If you enable a plugin that needs additional setup (like an operator), check the docs/ folder for manual configuration steps or open an issue if support is missing.
Step 3. Apply the deployment resources:
# Apply ServiceAccount, ClusterRole, ClusterRoleBinding, ConfigMap, and Secret
oc apply -k deploy/
# Verify the dynamic plugins ConfigMap was created (from Step 2c)
oc get configmap rhdh-dynamic-plugins -n rhdh-testbed
# Start the setup job
oc apply -f deploy/job.yaml

Note: The Job mounts the rhdh-dynamic-plugins ConfigMap to /app/resources/user-resources/dynamic-plugins-configmap.local.yaml. The setup script reads this to determine which cluster resources to deploy.
Optional Step Monitor the deployment:
# Watch the job status
oc get jobs -n rhdh-testbed -w
# View logs
oc logs -f job/rhdh-testbed-setup -n rhdh-testbed

Step 4. Access your RHDH instance:
Once the job completes, the RHDH route URL will be displayed in the logs.
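If the logs have scrolled past, the route can also be fetched directly (assuming RHDH landed in the default rhdh namespace from the ConfigMap above):

```shell
oc get route -n rhdh -o jsonpath='{.items[0].spec.host}{"\n"}'
```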
To clean up the RHDH deployment:
# Delete the setup job first
oc delete job rhdh-testbed-setup -n rhdh-testbed
# Run the teardown job
oc apply -f deploy/teardown-job.yaml
# Optional: watch teardown progress
oc logs -f job/rhdh-testbed-teardown -n rhdh-testbed

If you want to customize the scripts or use your own container registry:
Step 1. Build the image:
docker build -t your-registry/rhdh-testbed:latest .
Step 2. Push to your registry:
docker push your-registry/rhdh-testbed:latest
Step 3. Update the job manifests to use your image:
# Edit deploy/job.yaml and deploy/teardown-job.yaml
# Change the image reference:
# image: ghcr.io/pataknight/rhdh-testbed:latest
# To:
# image: your-registry/rhdh-testbed:latest

Alternatively, use kustomize to override the image:
cd deploy
kustomize edit set image ghcr.io/pataknight/rhdh-testbed:latest=your-registry/rhdh-testbed:v1.0.0
oc apply -k .

If you prefer to build the image directly in OpenShift instead of using the pre-built image from ghcr.io:
Step 1. Create the namespace and apply the BuildConfig:
oc new-project rhdh-testbed
oc apply -f deploy/build-config.yaml

Step 2. Start the build and wait for completion:
oc start-build rhdh-testbed -n rhdh-testbed --follow
Step 3. Apply the remaining resources and use the internal registry job:
oc apply -k deploy/
oc apply -f deploy/job-internal-registry.yaml

To clean up the RHDH deployment:
# Delete the setup job first
oc delete job rhdh-testbed-setup -n rhdh-testbed
# Run the teardown job
oc apply -f deploy/teardown-job-internal-registry.yaml
# Optional: watch teardown progress
oc logs -f job/rhdh-testbed-teardown -n rhdh-testbed

Benefits of building in-cluster:
- No external registry access required
- Full visibility into build process and logs
- Easy to customize by forking the repo and updating the BuildConfig git URL
- Image stays within your cluster's trust boundary
To use your own fork:
Edit deploy/build-config.yaml and change the git URI:
spec:
  source:
    git:
      uri: https://github.com/YOUR-USERNAME/rhdh-testbed.git
      ref: main  # or your branch

The Kubernetes Job requires elevated permissions to:
- Create namespaces and projects
- Install Operators via OLM
- Create ClusterRoles and ClusterRoleBindings
- Deploy various workloads and CRDs
This tool is designed for disposable, non-production clusters. The included ClusterRole grants
broad permissions necessary for the automation. Always review deploy/cluster-role.yaml before
applying. The Job uses a dedicated ServiceAccount (rhdh-testbed-runner) that is scoped to only
what's necessary for the deployment automation.
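Two quick checks, as a sketch: render the ClusterRole locally without touching the cluster, then (after applying) list what the Job's ServiceAccount is actually allowed to do:

```shell
# Render the manifest locally without creating anything
oc apply --dry-run=client -f deploy/cluster-role.yaml -o yaml
# After applying, enumerate the ServiceAccount's effective permissions
oc auth can-i --list --as=system:serviceaccount:rhdh-testbed:rhdh-testbed-runner
```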
So, you now have a running RHDH (Red Hat Developer Hub) instance, great! But this base setup is just the foundation. To transform it into a useful demo or testing environment, here are some next steps to take:
During setup, a number of editable resources are created under resources/user-resources/. These
are designed for customization and extension.
- resources/user-resources/app-config - Stores application configuration
- resources/user-resources/rbac-policy - Contains RBAC policy definitions used to manage user permissions
- resources/user-resources/rhdh-secrets - Holds required secrets like credentials and tokens. Note: this file is auto-generated; avoid editing it manually unless necessary, as changes could be overwritten
- resources/user-resources/dynamic-plugins - Controls which dynamic plugins are enabled and their configurations
The testbed uses the standard RHDH dynamic-plugins.default.yaml format for plugin configuration. To enable a plugin:
1. Edit the dynamic plugins config file:
   - Local runs: resources/user-resources/dynamic-plugins-configmap.local.yaml
   - Kubernetes Job: create a ConfigMap (see Option 3)
2. Set disabled: false on the plugin(s) you want:
   - package: ./dynamic-plugins/dist/backstage-community-plugin-tekton
     disabled: false # Change from true to false
3. Re-run the setup to apply changes:
   - Local: ./start.sh
   - Docker: docker compose up rhdh-start
   - Kubernetes Job: delete the existing job and recreate it
Each plugin includes demo guidance and sample scenarios to help showcase its features in a meaningful way.
- Plugin documentation within this project includes a Demo section
- Many plugins and integrations include sample data, configurations, or actions that simulate real-world usage.
- These examples are designed to create a richer, interactive experience for testing and presentation.
- A number of resources are included in this project. The aim was to require as little configuration as possible, so it isn't recommended to change the Kubernetes resources themselves. Ideally, only the user-resources should be updated (minus the rhdh-secrets).
- A number of credentials are meant to be stored locally to make access easier. The included .gitignore ignores any YAML files whose names contain .local.yaml to help prevent accidental leaks, but still be mindful of anything you add to this project.
  - This also means that if you wish to contribute changes or enhancements to the project, you don't have to worry about reverting or removing credentials and secrets.
- Following up on that, contributions are welcome. This was definitely a learning experience for me, which means there are more than likely better (and probably simpler) ways to accomplish what I did.