Include helm charts to install Kruize on minikube #4

chandrams merged 8 commits into kruize:mvp_demo from
Conversation
Reviewer's Guide

Adds a new Helm application chart that deploys the Kruize core, Postgres backing DB, and UI (via an nginx pod) with configurable values and a minikube-specific override file, including RBAC, storage, cron jobs, monitoring, and documentation.

Flow diagram for Helm installation with minikube-specific values override:

```mermaid
flowchart TB
    user[Developer]
    helm_install[helm_install_kruize]
    values_default[values_yaml_defaults]
    values_minikube[values_minikube_yaml_overrides]
    rendered_manifests[Rendered_Kubernetes_manifests]
    api_server[Kubernetes_API_server]
    kruize_ns[Namespace_kruize]
    user --> helm_install
    helm_install --> values_default
    helm_install --> values_minikube
    values_default --> rendered_manifests
    values_minikube --> rendered_manifests
    rendered_manifests --> api_server
    api_server --> kruize_ns
    subgraph kruize_ns_cluster[Namespace_kruize_resources]
        deploy_kruize[Deployment_kruize]
        svc_kruize[Service_kruize]
        deploy_db[Deployment_kruize_db]
        svc_db[Service_kruize_db_service]
        pv_db[PersistentVolume_kruize_db]
        pvc_db[PersistentVolumeClaim_kruize_db]
        pod_ui_nginx[Pod_kruize_ui_nginx]
        svc_ui_nginx[Service_kruize_ui_nginx]
        config_kruize[ConfigMap_kruize_config]
        config_nginx[ConfigMap_nginx_conf]
        cron_create[CronJob_create_partition]
        cron_delete[CronJob_delete_partition]
        sa[ServiceAccount_kruize]
        rbac_objects[ClusterRoles_and_ClusterRoleBindings]
        sm[ServiceMonitor_kruize]
        np[NetworkPolicy_to_prometheus]
    end
    kruize_ns --> deploy_kruize
    kruize_ns --> svc_kruize
    kruize_ns --> deploy_db
    kruize_ns --> svc_db
    kruize_ns --> pv_db
    kruize_ns --> pvc_db
    kruize_ns --> pod_ui_nginx
    kruize_ns --> svc_ui_nginx
    kruize_ns --> config_kruize
    kruize_ns --> config_nginx
    kruize_ns --> cron_create
    kruize_ns --> cron_delete
    kruize_ns --> sa
    kruize_ns --> rbac_objects
    kruize_ns --> sm
    kruize_ns --> np
```
Hey - I've found 4 issues, and left some high level feedback:
- In `templates/network_policy.yaml`, the `podSelector` and `ingress.from` selectors don’t match the labels applied by the Kruize deployment, and Prometheus is typically in a separate namespace (`monitoring`/`openshift-monitoring`), so this NetworkPolicy is unlikely to ever apply as intended; consider aligning labels and namespace selection with the actual Kruize and Prometheus resources.
- The PVC in `templates/storage_pvc.yaml` hardcodes `accessModes: [ReadWriteMany]`, ignoring `.Values.db.pvc.accessModes`, which will conflict with the PV created in `storage_pv.yaml` (e.g., `ReadWriteOnce` in `values-minikube.yaml`); use the values-driven accessModes for the PVC as well.
- The UI is deployed as a standalone Pod in `kruize_ui_nginx_pod.yaml`, which will not be automatically rescheduled on failure or support scaling; consider switching this to a Deployment to match typical Helm and Kubernetes operational patterns.
## Individual Comments
### Comment 1
<location path="charts/kruize/templates/network_policy.yaml" line_range="12-13" />
<code_context>
+ labels:
+ {{- include "kruize.labels" . | nindent 4 }}
+spec:
+ podSelector:
+ matchLabels:
+ app.kubernetes.io/name: prometheus
+ policyTypes:
</code_context>
<issue_to_address>
**issue (bug_risk):** NetworkPolicy `from` selector uses `app: kruize`, which does not match the deployment labels.
Since the Deployment only defines `app.kubernetes.io/name` and `app.kubernetes.io/instance` via `kruize.selectorLabels`, the `from` selector with `app: kruize` will never match, unintentionally blocking all ingress. Update the selector to use `app.kubernetes.io/name: {{ include "kruize.name" . }}` (and instance if appropriate), or add a matching `app: kruize` label to the pods.
</issue_to_address>
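A minimal sketch of the aligned policy, assuming the chart's existing `kruize.selectorLabels` helper and Prometheus running in a `monitoring` namespace (both the Prometheus label and namespace name are assumptions to verify against the actual cluster):

```yaml
spec:
  # Select the Kruize pods this policy protects, using the chart's own labels
  podSelector:
    matchLabels:
      {{- include "kruize.selectorLabels" . | nindent 6 }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allow scrapes from Prometheus pods in their own namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
```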
### Comment 2
<location path="charts/kruize/templates/configmap_nginx.yaml" line_range="12-13" />
<code_context>
+ nginx.conf: |
+ events {}
+ http {
+ upstream kruize-api {
+ server {{ $fullName }}:8080;
+ }
+
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Nginx upstream is hardcoded to port 8080 instead of using the configurable Kruize service port.
If `.Values.kruize.service.port` is changed, the Service/Deployment will follow it but this Nginx config will still route to `:8080`, breaking the proxy. Please use the templated service port here (e.g., `server {{ $fullName }}:{{ .Values.kruize.service.port }};`) to keep it consistent.
Suggested implementation:
```
upstream kruize-api {
server {{ $fullName }}:{{ .Values.kruize.service.port }};
}
```
If there are multiple `upstream kruize-api` blocks (the file snippet shows some duplication), apply the same replacement to each occurrence of `server {{ $fullName }}:8080;` so all Nginx upstreams stay consistent with `.Values.kruize.service.port`.
</issue_to_address>
### Comment 3
<location path="charts/kruize/templates/cronjobs.yaml" line_range="34-35" />
<code_context>
+ containers:
+ - name: kruize-cron-create
+ image: {{ $image }}
+ imagePullPolicy: IfNotPresent
+ volumeMounts:
+ - name: config-volume
</code_context>
<issue_to_address>
**suggestion (bug_risk):** CronJob imagePullPolicy is hardcoded and ignores the configured Kruize image pull policy.
The main Kruize deployment already uses `.Values.kruize.image.pullPolicy`, but this CronJob hardcodes `imagePullPolicy: IfNotPresent`. That means user-configured policies (e.g., `Always`) are ignored and new tags may not be pulled. Please template this field from `.Values.kruize.image.pullPolicy` for consistency with the main deployment.
```suggestion
image: {{ $image }}
imagePullPolicy: {{ .Values.kruize.image.pullPolicy }}
```
</issue_to_address>
### Comment 4
<location path="charts/kruize/README.md" line_range="107-109" />
<code_context>
+| `db.pvc.storageSize` | Storage size for database PVC | `500Mi` |
+| `db.pvc.hostPath` | Host path for database storage | `/mnt/data` |
+| `db.user` | User for Kruize DB container | `admin` |
+| `db.password` | Password for Kruize DB container | `admin` |
+| `db.adminUser` | Admin user for Kruize DB container | `admin` |
+| `db.adminPassword` | Admin password for Kruize DB container | `admin` |
+| `db.name` | Name of the Kruize DB | `kruizeDB` |
+| `db.sslMode` | SSL mode for database connection | `require` |
</code_context>
<issue_to_address>
**🚨 issue (security):** Document that the default `admin/admin` database credentials are for non-production use and should be overridden.
These four values all default to `admin`. Please add a brief note near this table stating that these credentials are for development/testing only and must be changed for any production deployment to avoid insecure setups.
</issue_to_address>
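A hedged example of such an override (the key names come from the chart's values table above; `kruize`, `kruize-admin`, and `CHANGE_ME` are placeholders, not values from the chart):

```yaml
# values-production.yaml -- the chart's admin/admin defaults are for
# development/testing only; always override them for real deployments
db:
  user: kruize
  password: CHANGE_ME          # e.g. inject from a secret manager
  adminUser: kruize-admin
  adminPassword: CHANGE_ME
```

Applied with something like `helm install kruize ./charts/kruize -f values-production.yaml`.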
@sourcery-ai review
Hey - I've found 2 issues, and left some high level feedback:
- The `network_policy.yaml` currently defines a policy that targets pods labeled `app.kubernetes.io/name: prometheus` in the release namespace, which likely does not contain Prometheus; if the intent is to restrict or allow traffic to/from the Kruize pods, consider selecting Kruize pods instead and/or using a `namespaceSelector` to reference the Prometheus namespace.
- The `storage_manual.yaml` template unconditionally creates a cluster-scoped `manual` StorageClass when it does not exist, which can be surprising/undesirable in shared clusters; consider guarding this behind a value (e.g. `db.pvc.createStorageClass`) so that operators can opt in to creating a StorageClass.
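A sketch of that opt-in guard, assuming a new `db.pvc.createStorageClass` value defaulting to `false` (the provisioner and binding mode shown are the usual choices for manually provisioned local volumes, not values taken from the PR):

```yaml
{{- if .Values.db.pvc.createStorageClass }}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
{{- end }}
```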
## Individual Comments
### Comment 1
<location path="charts/kruize/templates/role.yaml" line_range="4-13" />
<code_context>
+ - "*"
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: {{ $fullName }}-edit-ko
+rules:
+ - apiGroups: [ "apps" ]
+ resources: [ "deployments", "statefulsets", "daemonsets" ]
+ verbs: [ "get", "list", "patch", "update" ]
+ - apiGroups: [ "batch" ]
+ resources: [ "jobs" ]
+ verbs: [ "get", "list", "create", "delete" ]
+ - apiGroups: [ "" ]
+ resources: [ "namespaces" ]
+ verbs: [ "get", "list" ]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: {{ $fullName }}-instaslices-access
+rules:
+ - apiGroups: [ "inference.redhat.com" ]
+ resources: [ "instaslices" ]
+ verbs: [ "get", "list", "watch" ]
</code_context>
<issue_to_address>
**🚨 suggestion (security):** ClusterRoles grant very broad `*` permissions (including on pods and clusterrolebindings), which is more than the chart likely needs.
This role currently has `verbs: ['*']` on `pods`, `customresourcedefinitions`, `verticalpodautoscalers*`, and `clusterrolebindings`, which is excessive for an app component. Please scope it down to the specific verbs and resources actually required to limit blast radius and ease adoption in locked‑down clusters.
</issue_to_address>
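A hedged sketch of what a narrower role could look like (the exact verbs Kruize needs must be confirmed against its code; these rules are illustrative, and note that CRDs live under `apiextensions.k8s.io`, not the core group):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ $fullName }}-recommendation-updater
rules:
  - apiGroups: [ "" ]
    resources: [ "pods" ]
    verbs: [ "get", "list", "watch" ]          # read-only instead of "*"
  - apiGroups: [ "apiextensions.k8s.io" ]
    resources: [ "customresourcedefinitions" ]
    verbs: [ "get", "list" ]
  - apiGroups: [ "autoscaling.k8s.io" ]
    resources: [ "verticalpodautoscalers" ]
    verbs: [ "get", "list", "create", "patch", "update" ]
```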
### Comment 2
<location path="charts/kruize/templates/kruize_ui_nginx_pod.yaml" line_range="4" />
<code_context>
+{{- $fullName := include "kruize.fullname" . -}}
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: {{ $fullName }}-ui-nginx-pod
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Using a bare Pod for the UI nginx component limits resiliency and makes upgrades less smooth.
Defining this as a standalone Pod means it won’t be recreated if the node is drained or the Pod is deleted, and it complicates rolling upgrades via Helm. Unless there’s a strong reason to avoid a controller, consider using a Deployment with labels matching the Service selector to improve resiliency and upgradeability.
Suggested implementation:
```
{{- $fullName := include "kruize.fullname" . -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $fullName }}-ui-nginx
namespace: {{ .Release.Namespace }}
labels:
app: kruize-ui-nginx
spec:
replicas: {{ .Values.kruizeUI.replicaCount | default 1 }}
selector:
matchLabels:
app: kruize-ui-nginx
template:
metadata:
labels:
app: kruize-ui-nginx
spec:
containers:
- name: kruize-ui-nginx-container
image: "{{ .Values.kruizeUI.image.repository }}:{{ .Values.kruizeUI.image.tag }}"
imagePullPolicy: {{ .Values.kruizeUI.image.pullPolicy }}
env:
- name: KRUIZE_UI_ENV
```
1. Ensure `.Values.kruizeUI.replicaCount` exists in `values.yaml` (or is added) if not already present; otherwise adjust/remove the `replicas` line.
2. Confirm that any Service targeting this component uses `selector: { app: kruize-ui-nginx }` so it matches the Deployment's Pod template labels.
</issue_to_address>
```yaml
cpu: ""

pvc:
storageSize: "1Gi"
```
Deploying Kruize helm in minikube still uses PV and PVC values of Openshift
```
$ kubectl describe pvc kruize-db-pv-claim -n monitoring
Name:          kruize-db-pv-claim
Namespace:     monitoring
StorageClass:  manual
Status:        Bound
Volume:        kruize-db-pv-volume
Labels:        app=kruize-db
               app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: kruize
               meta.helm.sh/release-namespace: monitoring
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      500Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       kruize-db-6b68d577bc-nbpvk
Events:        <none>

$ kubectl describe pv kruize-db-pv-volume -n monitoring
Name:            kruize-db-pv-volume
Labels:          app=kruize-db
                 app.kubernetes.io/instance=kruize
                 app.kubernetes.io/managed-by=Helm
                 app.kubernetes.io/name=kruize
                 app.kubernetes.io/version=0.8.1
                 helm.sh/chart=kruize-0.1.0
                 type=local
Annotations:     meta.helm.sh/release-name: kruize
                 meta.helm.sh/release-namespace: monitoring
                 pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Bound
Claim:           monitoring/kruize-db-pv-claim
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        500Mi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/data/kruize
    HostPathType:
Events:          <none>
```
Passing minikube values overrides the Openshift defaults
```
helm install kruize ./charts/kruize -f ./charts/kruize/values-minikube.yaml --namespace monitoring
```
```
$ kubectl describe pv kruize-db-pv-volume -n monitoring
Name:            kruize-db-pv-volume
Labels:          app=kruize-db
                 app.kubernetes.io/instance=kruize
                 app.kubernetes.io/managed-by=Helm
                 app.kubernetes.io/name=kruize
                 app.kubernetes.io/version=0.8.1
                 helm.sh/chart=kruize-0.1.0
                 type=local
Annotations:     meta.helm.sh/release-name: kruize
                 meta.helm.sh/release-namespace: monitoring
                 pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Bound
Claim:           monitoring/kruize-db-pv-claim
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /data/postgres/kruize
    HostPathType:
Events:          <none>
```
```
## Prerequisites

- Kubernetes 1.19+ or OpenShift 4.x+
```
Please document Prometheus as a pre-requisite for minikube/kind clusters
Signed-off-by: Chandrakala Subramanyam <csubrama@redhat.com>
@mbvreddy Can you please review this PR
```
template:
metadata:
labels:
{{- include "kruize.selectorLabels" . | nindent 8 }}
```
Indentation is not proper. Not sure if it causes any problem.
```
name: {{ $fullName }}-recommendation-updater
---
{{- if .Values.rbac.create -}}
{{- if .Values.rbac.create }}
```
This will produce a document with

```
---
---
```

Though it is technically correct, ensure that we use a single document separator.
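One common pattern (a sketch, not the PR's actual template; `$fullName` is assumed to be defined earlier in the file) is to emit the `---` inside the conditional, so each guarded document carries exactly one separator:

```yaml
{{- if .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ $fullName }}-recommendation-updater
{{- end }}
```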
Signed-off-by: Chandrakala Subramanyam <csubrama@ibm.com>
Include helm charts to install Kruize on minikube
This is on top of #2
Summary by Sourcery
Add a Helm chart for deploying Kruize and its database/UI components, including Kubernetes manifests, configuration, and storage for running on standard clusters and minikube.