
Include helm charts to install Kruize on minikube#4

Merged
chandrams merged 8 commits into kruize:mvp_demo from chandrams:helm_charts_minikube
Apr 7, 2026
Conversation

@chandrams chandrams commented Feb 27, 2026

Include helm charts to install Kruize on minikube

This is on top of #2

Summary by Sourcery

Add a Helm chart for deploying Kruize and its database/UI components, including Kubernetes manifests, configuration, and storage for running on standard clusters and minikube.

New Features:

  • Introduce a configurable Helm chart for deploying the Kruize backend, PostgreSQL database, and UI proxy on Kubernetes clusters.
  • Provide minikube-specific Helm values to simplify local Kruize installation with adjusted resources, storage, and RBAC.
  • Expose Kruize metrics and services via Service, ServiceMonitor, and optional NetworkPolicy resources for integration with Prometheus.

Enhancements:

  • Template cron jobs, configuration maps, RBAC, storage, and health-check resources to fully manage Kruize lifecycle and database maintenance within Helm.
  • Document chart usage and configuration options, including example overrides, in a dedicated Helm README.

Documentation:

  • Add user-facing documentation describing how to install, configure, and uninstall the Kruize Helm chart, including parameter tables and examples.

@chandrams chandrams self-assigned this Feb 27, 2026

sourcery-ai Bot commented Feb 27, 2026

Reviewer's Guide

Adds a new Helm application chart that deploys the Kruize core, Postgres backing DB, and UI (via an nginx pod) with configurable values and a minikube-specific override file, including RBAC, storage, cron jobs, monitoring, and documentation.

Flow diagram for Helm installation with minikube-specific values override

flowchart TB
  user[Developer]
  helm_install[helm_install_kruize]
  values_default[values_yaml_defaults]
  values_minikube[values_minikube_yaml_overrides]
  rendered_manifests[Rendered_Kubernetes_manifests]
  api_server[Kubernetes_API_server]
  kruize_ns[Namespace_kruize]

  user --> helm_install
  helm_install --> values_default
  helm_install --> values_minikube
  values_default --> rendered_manifests
  values_minikube --> rendered_manifests

  rendered_manifests --> api_server
  api_server --> kruize_ns

  subgraph kruize_ns_cluster[Namespace_kruize_resources]
    deploy_kruize[Deployment_kruize]
    svc_kruize[Service_kruize]
    deploy_db[Deployment_kruize_db]
    svc_db[Service_kruize_db_service]
    pv_db[PersistentVolume_kruize_db]
    pvc_db[PersistentVolumeClaim_kruize_db]
    pod_ui_nginx[Pod_kruize_ui_nginx]
    svc_ui_nginx[Service_kruize_ui_nginx]
    config_kruize[ConfigMap_kruize_config]
    config_nginx[ConfigMap_nginx_conf]
    cron_create[CronJob_create_partition]
    cron_delete[CronJob_delete_partition]
    sa[ServiceAccount_kruize]
    rbac_objects[ClusterRoles_and_ClusterRoleBindings]
    sm[ServiceMonitor_kruize]
    np[NetworkPolicy_to_prometheus]
  end

  kruize_ns --> deploy_kruize
  kruize_ns --> svc_kruize
  kruize_ns --> deploy_db
  kruize_ns --> svc_db
  kruize_ns --> pv_db
  kruize_ns --> pvc_db
  kruize_ns --> pod_ui_nginx
  kruize_ns --> svc_ui_nginx
  kruize_ns --> config_kruize
  kruize_ns --> config_nginx
  kruize_ns --> cron_create
  kruize_ns --> cron_delete
  kruize_ns --> sa
  kruize_ns --> rbac_objects
  kruize_ns --> sm
  kruize_ns --> np

File-Level Changes

Change Details Files
Introduce a Helm application chart that defines the Kruize core deployment, Service, and configuration wiring.
  • Create Chart.yaml with metadata, appVersion 0.8.1, and Kubernetes compatibility requirements
  • Define helper templates for names, labels, service account, and versioned deployment name in _helpers.tpl
  • Add kruize_deployment.yaml to deploy the Kruize container with config-map volume, env vars from values, optional resource requests/limits, and a DB readiness initContainer
  • Expose Kruize via kruize_service.yaml with NodePort/ClusterIP determined by values and Prometheus scrape annotations
  • Add a basic helm test Pod in templates/tests/test-connection.yaml that wget’s the Kruize service
charts/kruize/Chart.yaml
charts/kruize/templates/_helpers.tpl
charts/kruize/templates/kruize_deployment.yaml
charts/kruize/templates/kruize_service.yaml
charts/kruize/templates/tests/test-connection.yaml
Provide configuration via ConfigMaps and values to parameterize Kruize, Hibernate, logging, datasources, and DB connectivity.
  • Create values.yaml with structured sections for Kruize image, env, config (Hibernate, logging, datasources), DB image and PVC, UI image, monitoring, RBAC, and networkPolicy toggles
  • Create configmap_kruize.yaml that renders dbconfigjson and kruizeconfigjson using .Values, including dynamic URLs and iterating over datasource entries
  • Add configmap_nginx.yaml to provide an nginx.conf that reverse-proxies /api to the Kruize service and serves the UI SPA
  • Add top-level README.md pointer to the chart README and a detailed charts/kruize/README.md documenting configuration options
charts/kruize/values.yaml
charts/kruize/templates/configmap_kruize.yaml
charts/kruize/templates/configmap_nginx.yaml
charts/kruize/README.md
README.md
Add a Postgres backing database deployment with persistent storage and wiring to Kruize.
  • Create kruize_db_deployment.yaml to run Postgres with credentials and DB name from values and a PVC-backed data volume
  • Add kruize_db_service.yaml to expose the DB via a ClusterIP service referenced by Kruize and the initContainer
  • Define storage_pv.yaml and storage_pvc.yaml to create a hostPath-backed PersistentVolume and matching PVC using values for storageClass, size, and hostPath
  • Add storage_manual.yaml to create a manual StorageClass if one does not already exist using lookup()
charts/kruize/templates/kruize_db_deployment.yaml
charts/kruize/templates/kruize_db_service.yaml
charts/kruize/templates/storage_pv.yaml
charts/kruize/templates/storage_pvc.yaml
charts/kruize/templates/storage_manual.yaml
Introduce minikube-specific overrides for lighter resources, local storage, and simplified monitoring and RBAC.
  • Add values-minikube.yaml that disables most resource requests/limits for Kruize and DB, switches k8sType to minikube, and reduces Hibernate pool sizes
  • Adjust datasource configuration in the minikube values to target a single prometheus-k8s instance in the monitoring namespace without auth token paths
  • Override DB volume mount path, PGDATA, PVC size, hostPath, access mode, and reclaimPolicy for local minikube storage
  • Disable OpenShift-specific RBAC and serviceAccount creation and enable a NetworkPolicy and monitoring with fixed scrape interval for minikube
charts/kruize/values-minikube.yaml
Add RBAC, service account, cron jobs, and monitoring integration required by Kruize.
  • Create service_account.yaml to optionally create a dedicated service account driven by values.serviceAccount
  • Define multiple ClusterRoles in role.yaml for recommendation updates, editing workloads, and accessing instaslices CRDs
  • Bind those roles to the service account in rolebinding.yaml, with conditional bindings for cluster-monitoring-view and OpenShift SCC when rbac.create is true
  • Create cronjobs.yaml defining create-partition and delete-partition CronJobs that reuse the Kruize image, mount the config map, and read env from values (including deletePartitionsThreshold)
  • Add service_monitor.yaml to create a ServiceMonitor for Prometheus Operator and network_policy.yaml to restrict Prometheus access when enabled
charts/kruize/templates/service_account.yaml
charts/kruize/templates/role.yaml
charts/kruize/templates/rolebinding.yaml
charts/kruize/templates/cronjobs.yaml
charts/kruize/templates/service_monitor.yaml
charts/kruize/templates/network_policy.yaml
Deploy the Kruize UI via an nginx pod and service using the UI image and shared nginx configuration.
  • Add kruize_ui_nginx_pod.yaml that runs the UI container image, mounts nginx.conf from the nginx ConfigMap, and sets KRUIZE_UI_ENV
  • Expose the UI nginx pod via kruize_ui_nginx_service.yaml with a configurable Service type and port from kruizeUI values
charts/kruize/templates/kruize_ui_nginx_pod.yaml
charts/kruize/templates/kruize_ui_nginx_service.yaml
Add licensing and legal boilerplate for the chart.
  • Add root LICENSE file and charts/kruize/LICENSE file (content not shown in diff) associated with the chart
LICENSE
charts/kruize/LICENSE


@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 4 issues, and left some high level feedback:

  • In templates/network_policy.yaml, the podSelector and ingress.from selectors don’t match the labels applied by the Kruize deployment and Prometheus is typically in a separate namespace (monitoring/openshift-monitoring), so this NetworkPolicy is unlikely to ever apply as intended; consider aligning labels and namespace selection with the actual Kruize and Prometheus resources.
  • The PVC in templates/storage_pvc.yaml hardcodes accessModes: [ReadWriteMany], ignoring .Values.db.pvc.accessModes, which will conflict with the PV created in storage_pv.yaml (e.g., ReadWriteOnce in values-minikube.yaml); use the values-driven accessModes for the PVC as well.
  • The UI is deployed as a standalone Pod in kruize_ui_nginx_pod.yaml, which will not be automatically rescheduled on failure or support scaling; consider switching this to a Deployment to match typical Helm and Kubernetes operational patterns.
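
For the PVC point above, a minimal sketch of a values-driven `accessModes` block (assuming `.Values.db.pvc.accessModes` is a list, as the minikube overrides suggest; the field names here are illustrative, not necessarily the chart's actual ones):

```yaml
# Sketch for charts/kruize/templates/storage_pvc.yaml, not the file's actual content
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kruize-db-pv-claim
spec:
  storageClassName: manual
  # Render the access modes from values so the PVC stays consistent with the PV
  accessModes:
    {{- range .Values.db.pvc.accessModes }}
    - {{ . }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.db.pvc.storageSize }}
```

This keeps the PVC's access mode in lockstep with the PV rendered from the same values file.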
## Individual Comments

### Comment 1
<location path="charts/kruize/templates/network_policy.yaml" line_range="12-13" />
<code_context>
+  labels:
+    {{- include "kruize.labels" . | nindent 4 }}
+spec:
+  podSelector:
+    matchLabels:
+      app.kubernetes.io/name: prometheus
+  policyTypes:
</code_context>
<issue_to_address>
**issue (bug_risk):** NetworkPolicy `from` selector uses `app: kruize`, which does not match the deployment labels.

Since the Deployment only defines `app.kubernetes.io/name` and `app.kubernetes.io/instance` via `kruize.selectorLabels`, the `from` selector with `app: kruize` will never match, unintentionally blocking all ingress. Update the selector to use `app.kubernetes.io/name: {{ include "kruize.name" . }}` (and instance if appropriate), or add a matching `app: kruize` label to the pods.
</issue_to_address>
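
A possible fix, sketched under the assumption that Prometheus runs in a `monitoring` namespace (adjust for `openshift-monitoring` as needed); the label values must match the chart's actual `kruize.selectorLabels`:

```yaml
# Sketch only: select the Kruize pods, allow ingress from Prometheus in its own namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: {{ include "kruize.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
```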

### Comment 2
<location path="charts/kruize/templates/configmap_nginx.yaml" line_range="12-13" />
<code_context>
+  nginx.conf: |
+    events {}
+    http {
+      upstream kruize-api {
+        server {{ $fullName }}:8080;
+      }
+
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Nginx upstream is hardcoded to port 8080 instead of using the configurable Kruize service port.

If `.Values.kruize.service.port` is changed, the Service/Deployment will follow it but this Nginx config will still route to `:8080`, breaking the proxy. Please use the templated service port here (e.g., `server {{ $fullName }}:{{ .Values.kruize.service.port }};`) to keep it consistent.

Suggested implementation:

```
      upstream kruize-api {
        server {{ $fullName }}:{{ .Values.kruize.service.port }};
      }

```

If there are multiple `upstream kruize-api` blocks (the file snippet shows some duplication), apply the same replacement to each occurrence of `server {{ $fullName }}:8080;` so all Nginx upstreams stay consistent with `.Values.kruize.service.port`.
</issue_to_address>

### Comment 3
<location path="charts/kruize/templates/cronjobs.yaml" line_range="34-35" />
<code_context>
+          containers:
+            - name: kruize-cron-create
+              image: {{ $image }}
+              imagePullPolicy: IfNotPresent
+              volumeMounts:
+                - name: config-volume
</code_context>
<issue_to_address>
**suggestion (bug_risk):** CronJob imagePullPolicy is hardcoded and ignores the configured Kruize image pull policy.

The main Kruize deployment already uses `.Values.kruize.image.pullPolicy`, but this CronJob hardcodes `imagePullPolicy: IfNotPresent`. That means user-configured policies (e.g., `Always`) are ignored and new tags may not be pulled. Please template this field from `.Values.kruize.image.pullPolicy` for consistency with the main deployment.

```suggestion
              image: {{ $image }}
              imagePullPolicy: {{ .Values.kruize.image.pullPolicy }}
```
</issue_to_address>

### Comment 4
<location path="charts/kruize/README.md" line_range="107-109" />
<code_context>
+| `db.pvc.storageSize` | Storage size for database PVC | `500Mi` |
+| `db.pvc.hostPath` | Host path for database storage | `/mnt/data` |
+| `db.user` | User for Kruize DB container | `admin` |
+| `db.password` | Password for Kruize DB container | `admin` |
+| `db.adminUser` | Admin user for Kruize DB container | `admin` |
+| `db.adminPassword` | Admin password for Kruize DB container | `admin` |
+| `db.name` | Name of the Kruize DB | `kruizeDB` |
+| `db.sslMode` | SSL mode for database connection | `require` |
</code_context>
<issue_to_address>
**🚨 issue (security):** Document that the default `admin/admin` database credentials are for non-production use and should be overridden.

These four values all default to `admin`. Please add a brief note near this table stating that these credentials are for development/testing only and must be changed for any production deployment to avoid insecure setups.
</issue_to_address>



@chandrams chandrams force-pushed the helm_charts_minikube branch from 2ad21cf to 7fbf235 Compare February 27, 2026 12:51
@chandrams (Contributor, Author)

@sourcery-ai review


@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • The network_policy.yaml currently defines a policy that targets pods labeled app.kubernetes.io/name: prometheus in the release namespace, which likely does not contain Prometheus; if the intent is to restrict or allow traffic to/from the Kruize pods, consider selecting Kruize pods instead and/or using a namespaceSelector to reference the Prometheus namespace.
  • The storage_manual.yaml template unconditionally creates a cluster‑scoped manual StorageClass when it does not exist, which can be surprising/undesirable in shared clusters; consider guarding this behind a value (e.g. db.pvc.createStorageClass) so that operators can opt in to creating a StorageClass.
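
An opt-in guard for the StorageClass could look like the sketch below; `db.pvc.createStorageClass` is a hypothetical value not present in the current chart:

```yaml
# Sketch for charts/kruize/templates/storage_manual.yaml: create the StorageClass
# only when the operator opts in AND it does not already exist in the cluster
{{- if .Values.db.pvc.createStorageClass }}
{{- if not (lookup "storage.k8s.io/v1" "StorageClass" "" "manual") }}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
{{- end }}
{{- end }}
```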
## Individual Comments

### Comment 1
<location path="charts/kruize/templates/role.yaml" line_range="4-13" />
<code_context>
+      - "*"
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: {{ $fullName }}-edit-ko
+rules:
+  - apiGroups: [ "apps" ]
+    resources: [ "deployments", "statefulsets", "daemonsets" ]
+    verbs: [ "get", "list", "patch", "update" ]
+  - apiGroups: [ "batch" ]
+    resources: [ "jobs" ]
+    verbs: [ "get", "list", "create", "delete" ]
+  - apiGroups: [ "" ]
+    resources: [ "namespaces" ]
+    verbs: [ "get", "list" ]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: {{ $fullName }}-instaslices-access
+rules:
+  - apiGroups: [ "inference.redhat.com" ]
+    resources: [ "instaslices" ]
+    verbs: [ "get", "list", "watch" ]
</code_context>
<issue_to_address>
**🚨 suggestion (security):** ClusterRoles grant very broad `*` permissions (including on pods and clusterrolebindings), which is more than the chart likely needs.

This role currently has `verbs: ['*']` on `pods`, `customresourcedefinitions`, `verticalpodautoscalers*`, and `clusterrolebindings`, which is excessive for an app component. Please scope it down to the specific verbs and resources actually required to limit blast radius and ease adoption in locked‑down clusters.
</issue_to_address>

### Comment 2
<location path="charts/kruize/templates/kruize_ui_nginx_pod.yaml" line_range="4" />
<code_context>
+{{- $fullName := include "kruize.fullname" . -}}
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: {{ $fullName }}-ui-nginx-pod
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Using a bare Pod for the UI nginx component limits resiliency and makes upgrades less smooth.

Defining this as a standalone Pod means it won’t be recreated if the node is drained or the Pod is deleted, and it complicates rolling upgrades via Helm. Unless there’s a strong reason to avoid a controller, consider using a Deployment with labels matching the Service selector to improve resiliency and upgradeability.

Suggested implementation:

```
{{- $fullName := include "kruize.fullname" . -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $fullName }}-ui-nginx
  namespace: {{ .Release.Namespace }}
  labels:
    app: kruize-ui-nginx
spec:
  replicas: {{ .Values.kruizeUI.replicaCount | default 1 }}
  selector:
    matchLabels:
      app: kruize-ui-nginx
  template:
    metadata:
      labels:
        app: kruize-ui-nginx
    spec:
      containers:
        - name: kruize-ui-nginx-container
          image: "{{ .Values.kruizeUI.image.repository }}:{{ .Values.kruizeUI.image.tag }}"
          imagePullPolicy: {{ .Values.kruizeUI.image.pullPolicy }}
          env:
            - name: KRUIZE_UI_ENV

```

1. Ensure `.Values.kruizeUI.replicaCount` exists in `values.yaml` (or is added) if not already present; otherwise adjust/remove the `replicas` line.
2. Confirm that any Service targeting this component uses `selector: { app: kruize-ui-nginx }` so it matches the Deployment's Pod template labels.
</issue_to_address>


Comment on lines +4 to +13
kind: ClusterRole
metadata:
  name: {{ $fullName }}-recommendation-updater
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - customresourcedefinitions
    verbs:

cpu: ""

pvc:
  storageSize: "1Gi"

Deploying Kruize helm in minikube still uses PV and PVC values of Openshift

$ kubectl describe pvc kruize-db-pv-claim -n monitoring
Name:          kruize-db-pv-claim
Namespace:     monitoring
StorageClass:  manual
Status:        Bound
Volume:        kruize-db-pv-volume
Labels:        app=kruize-db
               app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: kruize
               meta.helm.sh/release-namespace: monitoring
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      500Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       kruize-db-6b68d577bc-nbpvk
Events:        <none>
$ kubectl describe pv kruize-db-pv-volume -n monitoring
Name:            kruize-db-pv-volume
Labels:          app=kruize-db
                 app.kubernetes.io/instance=kruize
                 app.kubernetes.io/managed-by=Helm
                 app.kubernetes.io/name=kruize
                 app.kubernetes.io/version=0.8.1
                 helm.sh/chart=kruize-0.1.0
                 type=local
Annotations:     meta.helm.sh/release-name: kruize
                 meta.helm.sh/release-namespace: monitoring
                 pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Bound
Claim:           monitoring/kruize-db-pv-claim
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        500Mi
Node Affinity:   <none>
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/data/kruize
    HostPathType:  
Events:            <none>


Passing minikube values overrides the Openshift defaults

helm install kruize ./charts/kruize -f ./charts/kruize/values-minikube.yaml --namespace monitoring

$ kubectl describe pv kruize-db-pv-volume -n monitoring
Name:            kruize-db-pv-volume
Labels:          app=kruize-db
                 app.kubernetes.io/instance=kruize
                 app.kubernetes.io/managed-by=Helm
                 app.kubernetes.io/name=kruize
                 app.kubernetes.io/version=0.8.1
                 helm.sh/chart=kruize-0.1.0
                 type=local
Annotations:     meta.helm.sh/release-name: kruize
                 meta.helm.sh/release-namespace: monitoring
                 pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Bound
Claim:           monitoring/kruize-db-pv-claim
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /data/postgres/kruize
    HostPathType:  
Events:            <none>

Comment thread charts/kruize/README.md Outdated

## Prerequisites

- Kubernetes 1.19+ or OpenShift 4.x+

Please document Prometheus as a pre-requisite for minikube/kind clusters


@shreyabiradar07 shreyabiradar07 left a comment


LGTM

Signed-off-by: Chandrakala Subramanyam <csubrama@redhat.com>
@chandrams chandrams force-pushed the helm_charts_minikube branch from 187c742 to cb3374e Compare March 30, 2026 09:14
@chandrams (Contributor, Author)

@mbvreddy Can you please review this PR

template:
metadata:
labels:
{{- include "kruize.selectorLabels" . | nindent 8 }}

Indentation is not proper. Not sure if it causes any problem.

@chandrams (Contributor, Author)

Fixed

name: {{ $fullName }}-recommendation-updater
---
{{- if .Values.rbac.create -}}
{{- if .Values.rbac.create }}

This will produce a document with

---
---

Though it is technically correct, ensure that we use a single document separator.
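
One way to get a single separator, sketched against an assumed shape of the template (the real file may differ): emit the `---` inside the guarded block rather than before it:

```yaml
# Sketch: the separator is rendered only when the guarded document is rendered,
# so no back-to-back "---" lines can appear
{{- if .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ $fullName }}-binding   # hypothetical name
{{- end }}
```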

@chandrams (Contributor, Author)

Done

Signed-off-by: Chandrakala Subramanyam <csubrama@ibm.com>
Signed-off-by: Chandrakala Subramanyam <csubrama@ibm.com>
@chandrams chandrams requested a review from dinogun April 2, 2026 08:46
@chandrams chandrams merged commit 2486199 into kruize:mvp_demo Apr 7, 2026
1 check passed